1. 14:00 — First-order data-driven distributionally robust optimization

In the era of big data, Sample Average Approximation (SAA) has become a prominent data-driven technique for approximating stochastic optimization problems, partly because SAA lends itself to efficient solution via first-order methods. Nevertheless, when the dataset is small, SAA is susceptible to overfitting. Recently, distributionally robust optimization (DRO) based on Wasserstein-type ambiguity sets has emerged as an alternative data-driven method capable of mitigating overfitting, but it generally entails a higher computational overhead than SAA. In this work, we aim to tackle the computational roadblock of applying these data-driven DROs by demonstrating their compatibility with first-order methods. Specifically, we show that one- and two-stage optimization problems whose feasibility does not depend on the data realizations are amenable to first-order methods, in both the offline and online settings. Furthermore, we introduce novel probabilistic guarantees for implementing the online version of Wasserstein-based DRO. For cases where solution feasibility depends on the data realizations, we propose a first-order method that relies on approximating the feasible set. We validate the effectiveness of the suggested methods through numerical experiments.
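
To make the compatibility claim concrete, here is a minimal sketch of a first-order method for a Wasserstein-based DRO. It assumes the known reformulation of 1-Wasserstein DRO with a Lipschitz loss as norm-regularized empirical risk; the choice of logistic loss, the function name `wdro_logistic_sgd`, the radius `eps`, and the step-size rule are illustrative assumptions, not the speakers' algorithm:

```python
import numpy as np

def wdro_logistic_sgd(X, y, eps, steps=2000, lr0=0.1):
    """Subgradient descent on a Wasserstein-DRO surrogate for logistic
    regression with labels y in {-1, +1}.  For Lipschitz losses,
    1-Wasserstein DRO over an eps-ball reduces to empirical risk plus
    an eps-scaled dual-norm penalty; eps is a user-chosen radius."""
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(1, steps + 1):
        margins = np.clip(y * (X @ theta), -50.0, 50.0)
        # subgradient of the empirical logistic loss
        g = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        # subgradient of the eps * ||theta||_2 robustness penalty
        nrm = np.linalg.norm(theta)
        if nrm > 0.0:
            g += eps * theta / nrm
        theta -= (lr0 / np.sqrt(t)) * g
    return theta
```

Setting `eps = 0` recovers plain SAA, which makes the extra per-iteration cost of the robust version visible: a single norm computation.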

2. 14:30 — ** CANCELLED ** New algorithm for the difference of convex functions optimization problem

Maximizing the difference of two convex functions over a convex feasible set (the so-called DC problem) is a hard problem, and there is a very large number of publications on solving it. Most of them are variations of the widely used DCA algorithm, whose ability to reach a good approximation of a global solution depends crucially on the choice of its starting point. We introduce a new algorithm (NDC) which is markedly different from DCA. A major effort in NDC is the generation of a good starting point. This is obtained using COMAX, an algorithm for globally maximizing a constrained convex problem. In phase one of COMAX, a starting point is derived by approximating the objective function with a specific quadratic function (a Taylor expansion around the minimizer of the max-convex function) and approximating the convex feasible set by an ellipsoid. The result is a so-called "hidden convexity" problem which can easily be solved globally. Its optimal solution serves as the basis for a good starting point for a standard phase-two gradient ascent algorithm converging to a candidate for the global maximum.
The performance of the NDC algorithm is tested numerically, and the results are compared to the performance of the classical DCA.
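
For reference, the classical DCA baseline mentioned above admits a very short implementation. The sketch below uses the standard DC form min over C of g(x) - h(x) with the illustrative choice g(x) = ||x||^2 / 2, so each DCA subproblem reduces to a projection; the maximization variant considered in the talk is obtained by exchanging the roles of g and h:

```python
import numpy as np

def dca(grad_h, proj_C, x0, iters=100):
    """Classical DCA for min_{x in C} g(x) - h(x) with the
    illustrative choice g(x) = 0.5 * ||x||^2.  grad_h returns a
    (sub)gradient of the convex function h; proj_C is the Euclidean
    projection onto the convex feasible set C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = grad_h(x)   # supporting slope of h at the current iterate
        x = proj_C(y)   # argmin_{x in C} 0.5*||x||^2 - <y, x>
    return x
```

For a box constraint C = [-1, 1]^d, `proj_C` is simply `lambda y: np.clip(y, -1.0, 1.0)`. The dependence of the final iterate on `x0` is exactly the sensitivity to the starting point that NDC's phase one is designed to address.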

3. 15:00 — Towards Watermarking Tabular Datasets

Tabular datasets play an important role in machine learning tasks across many industries, including finance, healthcare, and insurance. We propose two novel approaches, one with distortion and one distortion-free, for watermarking tabular datasets arising from generative models, inspired by watermarking techniques for text LLMs. Experiments on various datasets validate our approaches and show that they are robust to a variety of adversarial attacks.
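
As a hint of how a distortion-free scheme can be adapted from LLM watermarking, the following sketch applies Gumbel-trick sampling with keyed pseudorandomness to a categorical column: since the index maximizing u_i^(1/p_i) is distributed exactly according to p, the marginal distribution of each generated cell is unchanged. The function names, the per-cell seeding on row id and column name, and the detector's score are hypothetical illustrations, not the authors' construction:

```python
import hashlib
import numpy as np

def keyed_uniforms(key, row_id, col, k):
    """Pseudorandom uniforms derived from a secret key (assumed
    per-cell seeding on row id and column name)."""
    digest = hashlib.sha256(f"{key}|{row_id}|{col}".encode()).digest()
    seed = int.from_bytes(digest[:8], "big")
    return np.random.default_rng(seed).random(k)

def watermarked_sample(probs, key, row_id, col):
    """Distortion-free sampling: argmax_i u_i**(1/p_i) is distributed
    exactly as probs, so single-cell statistics are unchanged."""
    u = keyed_uniforms(key, row_id, col, len(probs))
    return int(np.argmax(u ** (1.0 / np.asarray(probs, dtype=float))))

def detection_score(values, probs_list, key, row_ids, col):
    """Detector: watermarked cells tend to have large u at the chosen
    category; sum -log(1 - u) and compare against a threshold."""
    score = 0.0
    for v, probs, rid in zip(values, probs_list, row_ids):
        u = keyed_uniforms(key, rid, col, len(probs))
        score += -np.log(1.0 - u[v])
    return score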

4. 15:30 — Online MDP with Transition Prototypes: A Robust Adaptive Approach

In robust Markov Decision Processes (MDPs), an ambiguity set is constructed from the collected data. In many cases, however, data arrive as a stream, which raises the question of how to adaptively construct the ambiguity set as more data are collected. In this work, we consider a setting where we need to identify the true state-action transition probability from a finite set of candidate prototypes. We construct an online adaptive ambiguity set by updating our belief over the candidates. We prove the efficiency of this adaptive ambiguity set by providing a regret analysis for the policy obtained by solving the robust optimization problem with this ambiguity set. We also provide numerical results to demonstrate its empirical performance.
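
A minimal sketch of the belief update and a thresholded adaptive ambiguity set is given below. The Bayesian update follows directly from the finite-prototype setting, while the threshold rule `alpha` and all names are illustrative assumptions rather than the construction analyzed in the talk:

```python
import numpy as np

def update_belief(belief, prototypes, s, a, s_next):
    """Posterior over candidate transition models after observing one
    transition (s, a, s'): belief[k] is reweighted by the likelihood
    P_k(s' | s, a).  prototypes is a list of (S, A, S) arrays."""
    like = np.array([P[s, a, s_next] for P in prototypes])
    post = belief * like
    return post / post.sum()

def ambiguity_set(belief, prototypes, alpha=0.05):
    """Adaptive ambiguity set: keep the prototypes whose posterior
    mass still exceeds a confidence threshold."""
    return [P for P, b in zip(prototypes, belief) if b > alpha]
```

As data streams in, the ambiguity set shrinks toward the true prototype, and the robust policy computed against it becomes correspondingly less conservative.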