1 — 14:00 — **CANCELLED** — Complexity of Mixed Integer Nonlinear Programming with Nonconvexities
We establish complexity results for mixed integer programming in cases where nonconvexities describe the feasible region. This departs from most complexity results in integer programming, which focus on linear or convex constraints. Nonconvexities can easily lead to NP-hard cases, even in fixed dimension 2. Our goal is to expand the class of problems that we can solve in polynomial time when the dimension is fixed, ideally in a fixed-parameter-tractable way.
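To make the source of hardness concrete, here is a small illustration (ours, not from the talk): a single nonconvex polynomial inequality in dimension 2 can already carve the integer lattice into disconnected feasible pieces, exactly the structure that convexity-based fixed-dimension algorithms rely on ruling out.

```python
# Illustration: the union of two disjoint disks, written as one degree-4
# polynomial inequality, is a nonconvex feasible region in dimension 2.
def feasible(x, y):
    return (x**2 + y**2 - 4) * ((x - 5)**2 + y**2 - 4) <= 0

# Enumerate the feasible integer points: they form two separated clusters,
# so no convexity argument can connect them.
points = [(x, y) for x in range(-3, 9) for y in range(-3, 4) if feasible(x, y)]
print(points)
```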
2 — 14:30 — A Knowledge Compilation Take on Binary Polynomial Optimization
The Binary Polynomial Optimization (BPO) problem asks to minimize a given polynomial function over all binary points. The main contribution of this paper is to draw a novel connection between BPO and the problem of performing model counting over the (max, +) semiring on Boolean functions. This connection allows us to give a strongly polynomial algorithm that solves BPO whenever the associated hypergraph is either beta-acyclic or has bounded incidence treewidth. This result unifies and significantly extends the known tractable classes of BPO. The generality of our technique also lets us handle extensions of BPO in which we enforce extended cardinality constraints on the set of binary points, and in which we seek the k best feasible solutions. We further extend our results to the significantly more general problem where variables are replaced by literals. Preliminary computational results show that the resulting algorithms can be significantly faster than the current state of the art.
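To give a flavor of the semiring connection, the following is a minimal sketch (our illustration, not the paper's algorithm) of variable elimination over the (min, +) semiring, the minimization counterpart of (max, +), applied to a multilinear polynomial over binary variables. Its running time is exponential only in the induced width of the elimination order, which is the mechanism by which bounded incidence treewidth yields tractability.

```python
from itertools import product

def term_function(coef, variables):
    """Encode the monomial coef * prod_{v in variables} x_v as a lookup table."""
    scope = tuple(sorted(variables))
    table = {a: (coef if all(a) else 0.0)
             for a in product((0, 1), repeat=len(scope))}
    return (scope, table)

def minimize_bpo(terms, order):
    """Variable elimination over the (min, +) semiring.

    `terms` lists (coefficient, variables) pairs of a multilinear polynomial;
    `order` is an elimination order covering all variables.  The cost is
    exponential only in the induced width of `order`.
    """
    funcs = [term_function(c, vs) for c, vs in terms]
    for v in order:
        bucket = [f for f in funcs if v in f[0]]
        funcs = [f for f in funcs if v not in f[0]]
        if not bucket:
            continue
        scope = tuple(sorted(set().union(*(set(s) for s, _ in bucket)) - {v}))
        table = {}
        for assign in product((0, 1), repeat=len(scope)):
            env = dict(zip(scope, assign))
            table[assign] = min(  # minimize over the two values of v
                sum(t[tuple({**env, v: b}[u] for u in s)] for s, t in bucket)
                for b in (0, 1)
            )
        funcs.append((scope, table))
    return sum(t[()] for _, t in funcs)  # all remaining scopes are empty

# Example: minimize 2*x_a*x_b - 3*x_b*x_c + x_a over binary points -> -3
print(minimize_bpo([(2, "ab"), (-3, "bc"), (1, "a")], order="abc"))
```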
3 — 15:00 — On the power of linear programming for data clustering
We propose a linear programming (LP) relaxation for K-means clustering. We derive sufficient conditions under which this LP relaxation is tight and subsequently obtain recovery guarantees under a popular stochastic model for the input data. We present extensive computational experiments on real data sets. Finally, we present extensions that incorporate various notions of group and individual fairness.
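For concreteness, here is a sketch of one standard LP relaxation of K-means in the Peng–Wei style; this is our own illustration under assumed notation, and the exact LP studied in the talk may differ. The 0/1 partition structure is encoded by a normalized matrix Z with unit row sums, trace(Z) = k, symmetry, and the valid inequalities 0 <= z_ij <= z_ii; the objective is (1/2) sum_ij ||x_i - x_j||^2 z_ij.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def kmeans_lp(X, k):
    """Solve a Peng-Wei-style LP relaxation of K-means (a sketch)."""
    n = len(X)
    D = cdist(X, X, "sqeuclidean")
    c = 0.5 * D.flatten()          # objective: (1/2) * sum_ij D_ij z_ij

    A_eq, b_eq = [], []
    for i in range(n):             # row sums: sum_j z_ij = 1
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    row = np.zeros(n * n); row[::n + 1] = 1.0
    A_eq.append(row); b_eq.append(float(k))   # trace: sum_i z_ii = k
    for i in range(n):             # symmetry: z_ij = z_ji
        for j in range(i + 1, n):
            row = np.zeros(n * n)
            row[i * n + j], row[j * n + i] = 1.0, -1.0
            A_eq.append(row); b_eq.append(0.0)

    A_ub, b_ub = [], []            # z_ij <= z_ii (valid for partition matrices)
    for i in range(n):
        for j in range(n):
            if i != j:
                row = np.zeros(n * n)
                row[i * n + j], row[i * n + i] = 1.0, -1.0
                A_ub.append(row); b_ub.append(0.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, n)

# Two well-separated pairs of points: the LP value lower-bounds the K-means
# cost, and tightness means the optimal Z is a true normalized partition.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
val, Z = kmeans_lp(X, 2)
print(round(val, 3))
```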
4 — 15:30 — Sample Complexity and Computational Results for Branch and Cut Using Neural Networks
Data-driven algorithm design is a paradigm that uses statistical and machine learning techniques to select, from a class of algorithms for a computational problem, an algorithm with the best expected performance with respect to some (unknown) distribution over the instances of the problem. We build on recent work in this line of research by introducing the idea that, instead of selecting a single algorithm with the best performance overall, one can select an algorithm based on the instance to be solved. In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm *for that instance*. We formalize this idea and derive rigorous sample complexity bounds for this learning problem, in the spirit of recent work in data-driven algorithm design. We then apply this approach to the problem of making good decisions in the branch-and-cut framework for mixed-integer optimization (e.g., which cut to add?). In other words, the neural network takes as input a mixed-integer optimization instance and outputs a decision that results in a small branch-and-cut tree for that instance. Our computational results provide evidence that our particular way of using neural networks for cut selection can significantly reduce branch-and-cut tree sizes, compared to previous data-driven approaches.
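As a schematic of the learning setup, here is a minimal sketch with hypothetical features and labels (our illustration, not the authors' architecture): a small network maps a fixed-size featurization of a mixed-integer instance to scores over candidate cut-selection decisions, and is trained on sample instances labeled with the decision that produced the smallest branch-and-cut tree.

```python
import torch
import torch.nn as nn

# Hypothetical setup: each instance is summarized by n_features numbers
# (e.g., statistics of its constraint matrix), and there are n_choices
# candidate decisions (e.g., which cutting plane to add at the root).
n_features, n_choices = 32, 10

policy = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_choices),          # one score per candidate decision
)

# Synthetic stand-in for training data: instance features paired with the
# index of the decision that gave the smallest tree on that instance.
feats = torch.randn(256, n_features)
best_decision = torch.randint(0, n_choices, (256,))

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss_fn(policy(feats), best_decision).backward()
    opt.step()

# At solve time, apply the decision the network scores highest.
new_instance = torch.randn(1, n_features)
decision = policy(new_instance).argmax(dim=1).item()
print(decision)
```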