### 1 — 08:30 — Dynamic constraint aggregation for solving the minimum sum-of-squares clustering problem

The minimum sum-of-squares clustering problem (MSSC), also known as k-means clustering, refers to the problem of partitioning n data points into k clusters, with the objective of minimizing the total sum of squared Euclidean distances between each point and the center of its assigned cluster. We propose an efficient algorithm for solving large-scale MSSC instances in two-dimensional Euclidean space, which combines column generation (CG) with dynamic constraint aggregation (DCA) to effectively reduce the number of constraints considered in the CG master problem. DCA reduces degeneracy by utilizing an aggregated restricted master problem obtained from an aggregating partition of the set of data points into disjoint clusters. Our computational results demonstrate that the proposed algorithm significantly outperforms existing CG-based approaches available in the literature.
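The MSSC objective being minimized can be illustrated with a minimal Lloyd's k-means sketch. Note this heuristic is not the talk's exact CG/DCA method; it only shows the sum-of-squares objective. The data, deterministic initialization, and iteration count are illustrative assumptions.

```python
import numpy as np

def mssc_objective(points, labels, centers):
    # Total sum of squared Euclidean distances from each point
    # to the center of its assigned cluster.
    return sum(float(np.sum((points[labels == j] - centers[j]) ** 2))
               for j in range(len(centers)))

def lloyd_kmeans(points, k, iters=20):
    # Deterministic init (first k points) for reproducibility; a real
    # solver would use randomized restarts or k-means++ seeding.
    centers = points[:k].astype(float).copy()
    for _ in range(iters):
        # Assign each point to its nearest current center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its cluster.
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

# Two well-separated planar blobs; the heuristic recovers them.
pts = np.array([[0, 0], [0, 1], [1, 0],
                [10, 10], [10, 11], [11, 10]], dtype=float)
labels, centers = lloyd_kmeans(pts, 2)
print(mssc_objective(pts, labels, centers))  # ≈ 2.667 (= 8/3)
```

Unlike this local-search heuristic, the CG/DCA approach of the talk targets provably optimal MSSC solutions.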

### 2 — 09:00 — Generalized integral simplex

The set partitioning problem can be generalized. One generalization is to require that certain tasks be covered several times, so it is no longer a partition that is sought, but a family of subsets achieving the required number of coverings for each task.

Unlike in the set partitioning problem, the graph of integer solutions is no longer connected when the possible arcs are simplex pivots. However, to restore the connectedness of the graph, it is possible to add artificial variables locally, which allow new pivots to be made while preserving the integrality of the solution.

We give priority to pivots that keep the solution integer. When this is no longer possible, we solve a linear program to obtain a non-degenerate sequence of simplex pivots to perform in order to improve the solution. When the improved solution is still not integer, in some cases the pivot sequence allows adding an artificial column associated with an integer artificial point. The linear program is solved again from this point to find the remaining pivots needed to obtain a better integer solution. If integrality is still not reached, branching is performed.

We present results for large pairing problems (around 2,000 constraints and 300,000 variables) and show that the method is around 2 to 5 times faster than traditional methods (CPLEX), and that in many cases the addition of the artificial variable effectively allows moves to integer points not directly accessible by a pivot.
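The generalized model requires each task $i$ to be covered exactly $b_i$ times rather than once: minimize $c^\top x$ subject to $Ax = b$, $x$ binary. A brute-force sketch on a toy instance makes the feasibility condition concrete (the columns, costs, and cover counts are made up for illustration; real instances are solved by the pivot-based method of the talk, not enumeration):

```python
from itertools import product

# Toy instance (illustrative): 3 tasks, 4 candidate columns.
# columns[j] lists the tasks covered by column j; b[i] is the
# exact number of times task i must be covered.
columns = [{0, 1}, {1, 2}, {0, 2}, {1}]
costs = [3, 4, 3, 1]
b = [1, 2, 1]  # task 1 must be covered exactly twice

def solve_by_enumeration(columns, costs, b):
    best = None
    for x in product([0, 1], repeat=len(columns)):
        # Coverage count of each task under selection x.
        cover = [sum(x[j] for j, col in enumerate(columns) if i in col)
                 for i in range(len(b))]
        if cover == b:  # feasibility: exact cover counts
            cost = sum(c * xj for c, xj in zip(costs, x))
            if best is None or cost < best[0]:
                best = (cost, x)
    return best

print(solve_by_enumeration(columns, costs, b))  # → (7, (1, 1, 0, 0))
```

With $b = (1, 1, \dots, 1)$ this reduces to ordinary set partitioning, which is why the connectedness argument above is stated relative to that special case.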

### 3 — 09:30 — Integral Simplex for the Set Partitioning Problem

For the set partitioning problem, Balas and Padberg (1972) proved the existence of a sequence of integer solutions with non-increasing costs, produced by simplex pivots, that reaches optimality. Finding the next term in the sequence can be very time-consuming due to degeneracy. When no entering variable leads to a better integer solution, we use a linear programming subproblem to find a group of variables to enter the basis, which yields an improved solution that is integer most of the time. When this solution is not integer, we solve a small integer programming problem to obtain a better integer solution.
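The non-increasing cost sequence can be exhibited on a toy set partitioning instance by enumeration (the instance data are made up for illustration). Balas and Padberg's stronger result is that consecutive solutions in such a sequence can be taken adjacent, i.e., reachable by simplex pivots; this sketch only exhibits the cost ordering, not the pivots.

```python
from itertools import product

# Toy set partitioning instance (illustrative data): partition the
# ground set {0, 1, 2, 3} using these candidate subsets and costs.
subsets = [{0, 1}, {2, 3}, {0, 2}, {1, 3}, {0, 1, 2, 3}]
costs = [5, 5, 3, 4, 9]
n = 4

solutions = []
for x in product([0, 1], repeat=len(subsets)):
    cover = [sum(x[j] for j, s in enumerate(subsets) if i in s)
             for i in range(n)]
    if cover == [1] * n:  # each element covered exactly once
        solutions.append((sum(c * xj for c, xj in zip(costs, x)), x))

# Order solutions by non-increasing cost: the last term is optimal.
solutions.sort(reverse=True)
print([cost for cost, _ in solutions])  # → [10, 9, 7]
```

The difficulty the talk addresses is finding the next term of such a sequence efficiently under degeneracy, where enumeration is of course impossible.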

We first present results for large-scale problems (with up to 500,000 variables) for which optimal integer solutions are obtained, most of the time, without any branching. We then present the integration of this new approach with column generation and parallelism to solve large-scale bus driver and airline pilot problems. Problems with many thousands of tasks are solved 5 to 10 times faster than with branch and price based on CPLEX.

### 4 — 10:00 — New complementary problem formulation for the improved primal simplex

The primal simplex algorithm is still one of the most widely used algorithms in the operations research community. It moves from one basis to an adjacent one until optimality is reached. The number of bases visited can be very large, even exponential, due to degeneracy or when the path passes through many extreme points that are very close to each other. The improved primal simplex algorithm (IPS) is effective against degeneracy, but when there is no degeneracy it behaves exactly like the primal simplex and may consequently suffer from the same limitations.

We present a new formulation of the complementary problem, i.e., the auxiliary subproblem used by the improved primal simplex to find descent directions, that guarantees a significant improvement of the objective value at each iteration until an $\epsilon$-approximation of the optimal value is reached. We prove that the number of directions needed is polynomial.