1. 16:20 — The Subspace Flatness Conjecture and Faster Integer Programming

In a seminal paper, Kannan and Lov\'asz (1988) considered a quantity $\mu_{KL}(\Lambda,K)$
which denotes the best volume-based lower bound on the \emph{covering radius} $\mu(\Lambda,K)$ of a convex
body $K$ with respect to a lattice $\Lambda$. Kannan and Lov\'asz proved that $\mu(\Lambda,K) \leq n \cdot \mu_{KL}(\Lambda,K)$, and the Subspace Flatness Conjecture of Dadush (2012) asserts that an $O(\log n)$ factor suffices, which would match the lower bound from the work of Kannan and Lov\'asz.
We settle this conjecture up to a constant in the exponent by proving that $\mu(\Lambda,K) \leq O(\log^{3}(n)) \cdot \mu_{KL} (\Lambda,K)$. Our proof is based on the Reverse Minkowski Theorem due to Regev and Stephens-Davidowitz (2017).
Following the work of Dadush (2012, 2019), we obtain a $(\log n)^{O(n)}$-time randomized algorithm to
solve integer programs in $n$ variables.
Another implication of our main result is a near-optimal \emph{flatness constant} of $O(n \log^{3}(n))$.
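For context, the two quantities can be recalled as follows. This is a sketch only; normalizations of $\mu_{KL}$ vary across the literature, and the display below fixes one common convention.

```latex
% Covering radius: smallest scaling of K whose lattice translates cover space.
\[
  \mu(\Lambda, K) \;=\; \min\bigl\{ t \ge 0 \;:\; \Lambda + tK = \mathbb{R}^n \bigr\}.
\]
% Volume-based lower bound: if \Lambda + \mu K = \mathbb{R}^n, then for any
% surjective linear projection \pi \colon \mathbb{R}^n \to \mathbb{R}^k one has
% \pi(\Lambda) + \mu\,\pi(K) = \mathbb{R}^k, and a lattice covering forces
% \mathrm{vol}_k(\mu\,\pi(K)) \ge \det \pi(\Lambda). Maximizing over \pi gives
\[
  \mu_{KL}(\Lambda, K) \;=\; \max_{\pi}
  \left( \frac{\det \pi(\Lambda)}{\mathrm{vol}_k(\pi(K))} \right)^{1/k}
  \;\le\; \mu(\Lambda, K).
\]
```

The result above states that this elementary volume obstruction is tight up to a polylogarithmic factor.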

2. 16:50 — Integer programs with nearly totally unimodular matrices: proximity

It is a notorious open question whether integer programs (IPs) with an integer coefficient matrix $M$, all of whose subdeterminants are bounded by a constant in absolute value, can be solved in polynomial time. We answer this question in the affirmative under the additional requirement that removing a constant number of rows and columns from $M$ yields the transpose of a network matrix. We achieve our result in two main steps, the first related to the theory of IPs and the second to graph minor theory. In this talk, we focus on the first part: we derive a new proximity result for the case where $M$ is a general totally unimodular matrix and show how it can be used algorithmically in our context.
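To make the central hypothesis concrete: a matrix is totally unimodular (TU) when every square submatrix has determinant in $\{-1,0,1\}$. The following sketch checks this property by brute force; the function names are illustrative, the check is exponential-time, and it is meant only to illustrate the definition, not the algorithms of the talk.

```python
from itertools import combinations

def det(M):
    """Integer determinant via Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_totally_unimodular(M):
    """Brute-force TU check: every square submatrix has determinant in {-1, 0, 1}.
    Exponential time; only suitable for tiny matrices."""
    m, n = len(M), len(M[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[M[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# An "interval matrix" (consecutive ones in each row) is totally unimodular.
interval = [[1, 1, 0],
            [0, 1, 1],
            [1, 1, 1]]
print(is_totally_unimodular(interval))   # True

# A matrix containing a 2x2 submatrix of determinant 2 is not.
bad = [[1, 1],
       [-1, 1]]
print(is_totally_unimodular(bad))        # False
```

The "nearly TU" matrices of the talk are those that become transposed network matrices (a well-known TU subclass) after deleting constantly many rows and columns.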

3. 17:20 — Integrality Gaps for Random Integer Programs via Discrepancy

We prove new bounds on the additive gap between the value of a random integer program $\max\{c^T x : Ax \leq b,\ x \in \{0,1\}^n\}$ with $m$ constraints and that of its linear programming relaxation, for a wide range of distributions on $(A,b,c)$. We are motivated by the work of Dey, Dubey, and Molinaro (SODA '21), who gave a framework for relating the size of Branch-and-Bound (B\&B) trees to additive integrality gaps.
Dyer and Frieze (MOR '89) and Borst et al. (Mathematical Programming '22), respectively, showed that for certain random packing and Gaussian IPs, where the entries of $A,c$ are independently distributed according to either the uniform distribution on $[0,1]$ or the Gaussian distribution $N(0,1)$, the integrality gap is bounded by $O_m(\log^2(n)/n)$ with probability at least $1 - 1/n - e^{-\Omega(m)}$. In this paper, we first generalize these results to the case where the entries of $A$ are uniformly distributed on an integer interval (e.g., entries in $\{-1,0,1\}$), and where the columns of $A$ are distributed according to an isotropic logconcave distribution. Second, we substantially improve the success probability to $1 - 1/\mathrm{poly}(n)$, compared to the constant probability (depending on $m$) in prior works. Leveraging the connection to Branch-and-Bound, our gap results imply that for these IPs, B\&B trees have size $n^{\mathrm{poly}(m)}$ with high probability (i.e., polynomial for fixed $m$), which significantly extends the class of IPs for which B\&B is known to be polynomial.
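The additive integrality gap can be seen directly in the simplest setting above: a single-constraint random packing IP, whose LP relaxation is a fractional knapsack and is solved exactly by a greedy ratio rule. The sketch below is illustrative only; the instance sizes, seed, and capacity choice are assumptions, not parameters from the paper.

```python
import random
from itertools import product

# Illustrative random packing IP with one constraint (m = 1), in the spirit of
# the Dyer--Frieze setting: maximize c.x subject to a.x <= b, x in {0,1}^n.
random.seed(42)
n = 12
c = [random.random() for _ in range(n)]   # objective, uniform on [0, 1]
a = [random.random() for _ in range(n)]   # constraint row, uniform on [0, 1]
b = sum(a) / 4                            # capacity (arbitrary illustrative choice)

# IP optimum: brute force over all 2^n binary vectors (feasible since n is tiny).
ip_opt = max(
    sum(ci * xi for ci, xi in zip(c, x))
    for x in product((0, 1), repeat=n)
    if sum(ai * xi for ai, xi in zip(a, x)) <= b
)

# LP relaxation optimum: fractional knapsack, solved greedily by c_i/a_i ratio.
lp_opt, cap = 0.0, b
for ci, ai in sorted(zip(c, a), key=lambda p: p[0] / p[1], reverse=True):
    take = min(1.0, cap / ai)
    lp_opt += take * ci
    cap -= take * ai
    if cap <= 0:
        break

gap = lp_opt - ip_opt
print(f"LP = {lp_opt:.4f}, IP = {ip_opt:.4f}, additive gap = {gap:.4f}")
```

The gap is always nonnegative (the LP is a relaxation) and at most $\max_i c_i$ here, since dropping the single fractional item of the greedy solution yields a feasible integer point; the theorems above show it is in fact $O_m(\log^2(n)/n)$ with high probability.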
Our main technical contribution is a new linear discrepancy theorem for random matrices. Our theorem gives general conditions under which a target vector is equal to or very close to a $\{0,1\}$ combination of the columns of a random matrix $A$. The proof uses a Fourier analytic approach, building on work of Hoberg and Rothvoss (SODA '19) and Franks and Saks (RSA '20).
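A minimal way to see what linear discrepancy measures: given a matrix $A$, how closely can the fractional point $A \cdot (\tfrac12,\dots,\tfrac12)$ be matched by $A x$ with $x \in \{0,1\}^n$? The brute-force sketch below is an assumption-laden toy (random model, sizes, and seed are illustrative choices, not the paper's), not the Fourier-analytic argument itself.

```python
import random
from itertools import product

# Toy linear-discrepancy search: round the all-1/2 vector to a {0,1} vector
# while perturbing A @ x as little as possible in the sup-norm.
random.seed(7)
m, n = 3, 10
A = [[random.choice((-1, 0, 1)) for _ in range(n)] for _ in range(m)]
target = [sum(row) / 2 for row in A]      # A applied to the all-1/2 vector

best_disc, best_x = float("inf"), None
for x in product((0, 1), repeat=n):
    # sup-norm distance between A @ x and the fractional target
    disc = max(abs(sum(aij * xj for aij, xj in zip(row, x)) - t)
               for row, t in zip(A, target))
    if disc < best_disc:
        best_disc, best_x = disc, x

print(f"best 0/1 rounding achieves sup-norm distance {best_disc}")
```

The discrepancy theorem of the paper gives conditions under which this best distance is zero or very small for random $A$, which is exactly what drives the integrality-gap bounds above.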