### 1 — 08:30 — Robust Appointment Scheduling with General Convex Uncertainty Sets

Efficient appointment management is crucial for enhancing overall operational efficiency and service quality in many sectors, with healthcare as a leading example. The Appointment Scheduling Problem (ASP) considers a finite number of customers with uncertain service times served consecutively by a single server, and aims at finding the schedule that minimizes the weighted costs of the customers’ waiting time and the server’s idle time and overtime. We address the ASP from a Robust Optimization (RO) perspective, assuming that the uncertain service times fall within a given uncertainty set and considering the worst-case costs over this set. This problem can be formulated as a complex RO problem that is convex in the uncertain parameters, and we use several techniques, such as the Reformulation Perspectification Technique, to solve it. As a result, we propose a robust framework that can handle large problem instances and accommodates general polyhedral and ellipsoidal uncertainty sets. Numerically solving the ASP with uncertain service times is notoriously hard, and previous studies using stochastic programming methods could only deal with small problem instances. Our novel RO method beats this curse of dimensionality, and can obtain optimal robust schedules for over 50 customers and good approximate solutions for over 100 customers. For specific choices of uncertainty sets, we discuss patterns in these robust schedules, often the well-known dome shape.
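To make the cost structure of the ASP concrete, here is a minimal sketch (not the talk's method) that evaluates the waiting/idle/overtime cost of a schedule given realized service times, and computes the worst case over a simple box uncertainty set. The total cost is convex in the service times (the idle-time terms telescope into a convex expression), so the box maximum is attained at a vertex; this toy version enumerates all vertices, which only works for small instances. All names, weights, and numbers are illustrative assumptions.

```python
from itertools import product

def schedule_cost(x, s, cw=1.0, ci=1.0, co=1.5):
    """Cost of time allowances x (length n-1) under service times s (length n).

    cw, ci, co weight total waiting, total idle time, and overtime
    (illustrative values, not from the talk).
    """
    wait = idle = 0.0
    w = 0.0  # waiting time of the current customer
    for i in range(len(s) - 1):
        w_next = max(0.0, w + s[i] - x[i])      # next customer's waiting time
        idle += max(0.0, x[i] - w - s[i])       # server idle before next customer
        wait += w_next
        w = w_next
    overtime = w + s[-1]  # time the server runs past the last appointment
    return cw * wait + ci * idle + co * overtime

def worst_case_cost(x, lo, hi, **costs):
    """Worst case over the box [lo, hi]: the cost is convex in s,
    so the maximum is attained at a vertex of the box (enumerated here)."""
    return max(schedule_cost(x, s, **costs)
               for s in product(*zip(lo, hi)))
```

For general polyhedral or ellipsoidal sets, as in the talk, this brute-force vertex enumeration no longer applies; that is exactly the regime where reformulation techniques are needed.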

### 2 — 09:00 — Connections and Reformulations between Robust and Bilevel Optimization

Robust and bilevel optimization share the common feature that they involve a certain multilevel structure. Hence, although they model rather different situations when used in practice, they appear to have a similar mathematical structure. In this talk, we analyze the connections between different types of robust problems (strictly robust problems with and without decision-dependence of their uncertainty sets, min-max-regret problems, and two-stage robust problems) as well as between different types of bilevel problems (optimistic problems, pessimistic problems, and robust bilevel problems). It turns out that bilevel optimization seems to be more general in the sense that for most types of robust problems one can find proper reformulations as bilevel problems, but not necessarily the other way around. We hope that these results pave the way for a stronger connection between the two fields, in particular allowing theory and algorithms from one field to be used in the other and vice versa.
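As a minimal illustration of the reformulation direction discussed above (notation mine, not from the talk), a strictly robust problem can be cast as a bilevel problem whose lower level plays the role of the adversary:

```latex
% Strictly robust problem: the decision maker hedges against the worst case
\min_{x \in \mathcal{X}} \; \max_{u \in \mathcal{U}} \; f(x, u)

% Equivalent bilevel formulation: the lower level is the adversary
\begin{aligned}
\min_{x \in \mathcal{X},\, u^*} \quad & f(x, u^*) \\
\text{s.t.} \quad & u^* \in \arg\max_{u \in \mathcal{U}} f(x, u)
\end{aligned}
```

Here the optimistic and pessimistic readings of the bilevel problem coincide, since every lower-level optimizer $u^*$ attains the same objective value; for richer robust variants (decision-dependent sets, min-max regret, two-stage), the talk's analysis shows which bilevel structures are needed.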

### 3 — 09:30 — LP-based approximations for disjoint bilinear and two-stage adjustable robust optimization

We consider the class of disjoint bilinear programs $\max\{x^\top y \mid x \in X,\ y \in Y\}$ where $X$ and $Y$ are packing polytopes. We present an $O\!\left(\frac{\log\log m_1}{\log m_1} \cdot \frac{\log\log m_2}{\log m_2}\right)$-approximation algorithm for this problem, where $m_1$ and $m_2$ are the number of packing constraints in $X$ and $Y$, respectively. In particular, we show that there exists a near-optimal solution $(x, y)$ such that $x$ and $y$ are “near-integral”. We give an LP relaxation of the problem from which we obtain the near-optimal near-integral solution via randomized rounding. As an application of our techniques, we present a tight approximation for the two-stage adjustable robust optimization problem with covering constraints and right-hand-side uncertainty, where the separation problem is a bilinear optimization problem. In particular, based on the ideas above, we give an LP restriction of the two-stage problem that is an $O\!\left(\frac{\log n}{\log\log n} \cdot \frac{\log L}{\log\log L}\right)$-approximation, where $L$ is the number of constraints in the uncertainty set. This significantly improves over the state-of-the-art approximation bounds known for this problem. Furthermore, we show that our LP restriction gives a feasible affine policy for the two-stage robust problem with the same (or better) objective value. As a consequence, affine policies give an $O\!\left(\frac{\log n}{\log\log n} \cdot \frac{\log L}{\log\log L}\right)$-approximation of the two-stage problem, significantly generalizing the previously known bounds on their performance.
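The disjoint structure is what makes the problem tractable to approximate: for fixed $y$ the objective is linear in $x$ (and vice versa), so some optimal solution sits at a pair of vertices. The toy sketch below exploits that observation by brute-force enumeration over explicitly given vertex lists; it is only a structural illustration, not the talk's LP-relaxation-plus-randomized-rounding algorithm, and all inputs are made up.

```python
from itertools import product

def disjoint_bilinear_max(verts_X, verts_Y):
    """max x^T y over x in conv(verts_X), y in conv(verts_Y).

    For fixed y the problem is an LP in x (and symmetrically in y),
    so an optimum is attained at a vertex pair; enumerate all pairs.
    Exponential in general -- illustration only.
    """
    return max(sum(a * b for a, b in zip(x, y))
               for x, y in product(verts_X, verts_Y))
```

For example, with $X = Y = \{x \ge 0,\ x_1 + x_2 \le 1\}$ (vertices $(0,0)$, $(1,0)$, $(0,1)$), the maximum of $x^\top y$ is $1$, attained at matching unit vertices.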

### 4 — 10:00 — Quadratic Optimization Through the Lens of Adjustable Robust Optimization

Quadratic optimization (QO) has been studied extensively in the literature due to its applicability in many practical problems. While practical, it is known that QO problems are generally NP-hard. So, researchers developed many approximation methods to find good solutions. In this talk, we go beyond the norm and analyze QO problems using robust optimization techniques. To this end, we first show that any QO problem can be reformulated as a disjoint bi-convex QO problem. Then, we provide an equivalent adjustable robust optimization (ARO) reformulation and leverage the methods available in the literature on ARO to approximate this reformulation. More specifically, we show that using a so-called decision rule technique to approximate the ARO reformulation is interpreted as using a linearization-relaxation technique on its bi-convex reformulation problem. Additionally, we design an algorithm that can find a close-to-optimal solution based on our new reformulations. Our numerical results demonstrate the efficiency of our algorithm, particularly for large-sized instances, compared with the off-the-shelf solvers.