14:00 — FAST: An Optimization Framework for Fast Additive Segmentation in Transparent ML

We present FAST, an optimization framework for fast additive segmentation. FAST segments piecewise constant shape functions for each feature in a dataset to produce transparent additive models. The framework leverages a novel optimization procedure to fit these models ~2 orders of magnitude faster than existing state-of-the-art methods, such as explainable boosting machines (Nori et al. 2019). We also develop new feature selection algorithms in the FAST framework to fit parsimonious models that perform well. Through experiments and case studies, we show that FAST improves the computational efficiency and interpretability of additive models.
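To make the model class concrete, here is a minimal sketch, assuming nothing about FAST's internals, of fitting one piecewise-constant shape function per feature by cyclic backfitting over equal-frequency bins. The function name `fit_additive` and the parameters `n_bins` and `n_passes` are illustrative only; FAST's contribution is precisely a much faster optimization procedure than this kind of plain iteration.

```python
import numpy as np

def fit_additive(X, y, n_bins=8, n_passes=10):
    """Fit one piecewise-constant shape function per feature by cyclic
    backfitting over equal-frequency bins (an illustration of the model
    class only, not FAST's optimization procedure)."""
    n, p = X.shape
    # Interior quantiles give n_bins equal-frequency bins per feature.
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(p)]
    bins = np.column_stack([np.digitize(X[:, j], edges[j]) for j in range(p)])
    intercept, shapes = y.mean(), np.zeros((p, n_bins))
    fitted = np.full(n, intercept)
    for _ in range(n_passes):
        for j in range(p):
            fitted -= shapes[j, bins[:, j]]      # remove feature j's contribution
            resid = y - fitted
            for b in range(n_bins):              # step value = mean residual per bin
                mask = bins[:, j] == b
                if mask.any():
                    shapes[j, b] = resid[mask].mean()
            fitted += shapes[j, bins[:, j]]
            offset = shapes[j, bins[:, j]].mean()
            shapes[j] -= offset                  # center the shape function and
            intercept += offset                  # absorb the shift into the intercept
    return intercept, edges, shapes
```

A prediction is the intercept plus one step value per feature, so each shape function can be plotted and read directly, which is what makes the model transparent.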

14:30 — Reliability evaluation of Gaussian processes applied to power systems

Accurately simulating the electrical grid is crucial in power systems, but it is also very time-consuming, since the entire electric network must be taken into account. Reducing computation time is therefore a pressing concern, and the use of proxies, which approximate some simulation results at a much lower computational cost, is garnering growing interest in the community. An increasingly popular proxy technique for learning unknown dynamics is the Gaussian process, a statistical model that is theoretically able to learn any black-box function through correlations between outputs. Gaussian processes also allow prior knowledge of the system's physics to be incorporated into the proxy, and they provide confidence intervals around their predictions. This makes them particularly suitable for critical systems, where uncertainty quantification is essential. However, like all machine learning algorithms, Gaussian processes can sometimes learn poorly, leading to inaccurate predictions and erroneous uncertainty quantification. Before trusting the confidence interval provided with a prediction, it is essential to ensure that the learned model is reliable. Our work presents a methodology for evaluating the reliability of Gaussian processes, so as to avoid the severe consequences of trusting a flawed model.
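As one simple instance of such a reliability check (a sketch, not the talk's full methodology), the snippet below fits a scikit-learn Gaussian process to a toy stand-in for an expensive simulation and measures how often its 95% confidence intervals actually cover held-out targets; the toy function, kernel choice, and all parameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# Toy stand-in for an expensive grid simulation: y = f(x) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_tr, y_tr)
mean, std = gp.predict(X_te, return_std=True)

# A well-calibrated GP's 95% intervals should cover roughly 95% of
# held-out targets; a large deviation flags a flawed model.
inside = np.abs(y_te - mean) <= 1.96 * std
print(f"Empirical coverage of 95% intervals: {inside.mean():.2f}")
```

If the empirical coverage falls far from the nominal level, neither the predictions nor the confidence intervals should be trusted.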

15:00 — Closed-loop Koopman operator approximation

This presentation discusses a previously published method to identify a Koopman model of a feedback-controlled system given a known controller. The Koopman operator allows a nonlinear system to be rewritten as an infinite-dimensional linear system by viewing it in terms of an infinite set of lifting functions. A finite-dimensional approximation of the Koopman operator can be identified from data by choosing a finite subset of lifting functions and solving a regression problem in the lifted space. Existing methods are designed to identify open-loop systems. However, it is impractical or impossible to run experiments on some systems, such as unstable systems, in an open-loop fashion. The proposed method leverages the linearity of the Koopman operator, along with knowledge of the controller and the structure of the closed-loop system, to simultaneously identify the closed-loop and plant systems. The advantages of the proposed closed-loop Koopman operator approximation method are demonstrated experimentally using a rotary inverted pendulum system. An open-source software implementation of the proposed method is publicly available, along with the experimental dataset generated for this paper.
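For background, here is a minimal sketch of the standard open-loop EDMD regression that such methods build on; the closed-loop identification in the talk additionally exploits the known controller and the loop structure. The dictionary of lifting functions and the toy system below are illustrative assumptions.

```python
import numpy as np

def lift(x):
    """Example dictionary of lifting functions: the state plus a few monomials."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def edmd(X, X_next):
    """Least-squares Koopman approximation K with Psi(x_{k+1}) ~= K Psi(x_k)."""
    Psi = np.array([lift(x) for x in X]).T            # lifted snapshots, (d, N)
    Psi_next = np.array([lift(x) for x in X_next]).T
    return Psi_next @ np.linalg.pinv(Psi)             # solves the lifted regression

def f(x):
    """Toy open-loop nonlinear system x_{k+1} = f(x_k)."""
    return np.array([0.9 * x[0], 0.8 * x[1] + 0.1 * x[0] ** 2])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
X_next = np.array([f(x) for x in X])
K = edmd(X, X_next)

# One-step prediction: lift, advance linearly, read off the state coordinates.
x0 = np.array([0.5, -0.3])
print(K @ lift(x0))  # first two entries approximate f(x0)
```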

15:30 — Novel Mixed-Integer Optimization Approaches for Interpretable SVMs

Interpretability in machine learning continues to grow in importance: the goal is to create models that offer clear explanations of their decisions, both for prediction and for debugging. Embedded feature selection methods, which select the most relevant features while simultaneously training the machine learning model, are particularly effective at identifying feature subsets that yield accurate and efficient models.
In this talk, we will discuss two recent projects on interpretable classification using both linear and nonlinear Support Vector Machines (SVMs). We are interested in embedded feature selection models that can be formulated using Mixed-Integer Nonlinear Optimization (MINLO) techniques. In particular, we analyze the case in which a cardinality constraint, limiting the number of features used to train the classifier, is added to the standard primal and dual SVM formulations.
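For concreteness, one standard way to express such a cardinality-constrained linear SVM in the primal uses binary selection variables $z_j$ and a big-$M$ bound; this textbook-style formulation is given only as background and is not the complementarity-constraint model developed in the talk:

```latex
\begin{aligned}
\min_{w,\,b,\,\xi,\,z}\quad & \tfrac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.}\quad & y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0,
    && i = 1,\dots,n, \\
& -M z_j \le w_j \le M z_j, && j = 1,\dots,p, \\
& \sum_{j=1}^{p} z_j \le k, \quad z_j \in \{0,1\}, && j = 1,\dots,p.
\end{aligned}
```

Setting $z_j = 0$ forces $w_j = 0$, so at most $k$ features enter the classifier.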
For linear SVMs, we formulate the problem as a novel Mixed-Integer Quadratic Optimization model with a complementarity constraint. We show how to algorithmically exploit scalable conic relaxations to solve the model for large datasets, and we present both heuristic and exact procedures. For the nonconvex MINLO problem associated with nonlinear SVM models, which has been minimally explored in the literature, we propose a novel decomposition algorithm based on submodular function maximization that works with polynomial kernels of any degree.
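To illustrate the greedy principle that underlies submodular maximization (a heuristic sketch under assumed names and parameters, not the decomposition algorithm of the talk), forward selection for a polynomial-kernel SVM can be written as follows, where each step adds the feature with the largest marginal gain in cross-validated accuracy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def greedy_select(X, y, k, degree=3):
    """Greedy forward selection of at most k features for a polynomial-kernel
    SVM, scoring each candidate by 3-fold cross-validated accuracy."""
    selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
    for _ in range(k):
        scores = {j: cross_val_score(SVC(kernel="poly", degree=degree),
                                     X[:, selected + [j]], y, cv=3).mean()
                  for j in remaining}
        j_star = max(scores, key=scores.get)
        if scores[j_star] <= best_score:
            break                                  # no marginal gain: stop early
        best_score = scores[j_star]
        selected.append(j_star)
        remaining.remove(j_star)
    return selected, best_score
```

When the gain function is monotone and (approximately) submodular, greedy schemes of this kind enjoy the classical $(1 - 1/e)$ approximation guarantee, which is the intuition behind such decomposition approaches.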
Numerical results demonstrate the effectiveness of our algorithms in solving these optimization models, outperforming off-the-shelf solvers and other solution approaches. We will also hint at possible extensions of our work to additional machine learning problems.