1 — 14:00 — Weighted graph burning: Complexity and MIP formulations
Graph burning is a discrete-time process mimicking the spread of fire in graphs and was introduced to model the spread of influence in networks. In every step, existing fires spread to neighbouring vertices and one new vertex is set ablaze. Its associated graph parameter, the burning number, denotes the minimum number of steps needed to fully burn a graph.
We introduce a weighted variant of graph burning, where laying a fire carries a vertex-dependent cost. The aim is to burn a graph within a bounded number of steps at minimal cost. We focus on a special case that amounts to laying fires only at vertices from a preselected set and show that it remains NP-hard even when restricted to path graphs.
Further, we introduce a neighbourhood-based MIP formulation for graph burning and adapt it to weighted graph burning. Preliminary computational experiments suggest that the new formulation outperforms earlier formulations by an order of magnitude.
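For intuition, the burning process described above is easy to simulate: in each round, existing fires first spread to all neighbours, then one new vertex from the ignition sequence is set ablaze. The sketch below is illustrative only (the function and variable names are not from the talk); it checks whether a given ignition sequence burns the whole graph, so the burning number is the length of the shortest sequence for which it succeeds.

```python
def burn(adj, sources):
    """Simulate graph burning. adj: vertex -> list of neighbours.
    In round i, all existing fires spread one step, then sources[i]
    is set ablaze. Returns True iff every vertex ends up burned."""
    burned = set()
    for s in sources:
        burned |= {v for u in burned for v in adj[u]}  # fires spread
        burned.add(s)                                  # new fire laid
    return len(burned) == len(adj)

# Path graph on 5 vertices: 0-1-2-3-4 (burning number of P5 is 3).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(burn(path, [2, 0, 4]))  # a valid 3-step burning sequence
print(burn(path, [1, 3]))     # 2 steps cannot burn P5
```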
2 — 14:30 — Computing cuts in graphs: Gomory-Hu tree and the Sparsest Cut problem
I will consider two classical problems in combinatorial optimization for undirected weighted graphs. The first one is to compute a Gomory-Hu tree, which is a data structure representing minimum s-t cuts for all pairs s,t. I will present a new algorithmic approach based on a reduction to a problem that I call "OrderedCuts". I will discuss its theoretical properties and show some experimental results, which indicate that the new approach is among the current state-of-the-art GH tree construction algorithms.
The second problem is to compute the Sparsest Cut, i.e. a cut (S,T) that minimizes the ratio cost(S,T) / min(|S|,|T|). The best known algorithm for this NP-hard problem is due to [Sherman'09], which computes an $O(\sqrt{(\log n)/\varepsilon})$-approximation using $O(n^\varepsilon \cdot \mathrm{polylog}(n))$ maxflow computations. I will present an alternative approach which simplifies Sherman's algorithm and also allows parallelization: it gives the same approximation using $O(\mathrm{polylog}(n))$ maxflows on $O(n^\varepsilon)$ processors.
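To make the objective concrete: the sparsity of a cut is cheap to evaluate, and on tiny graphs the sparsest cut can even be found by brute force over all subsets, which is useful for sanity-checking approximation algorithms. The helper names below are illustrative and not from the talk.

```python
from itertools import combinations

def sparsity(cost, S, n):
    """cost(S,T) / min(|S|,|T|) for the cut (S, V\\S) on vertices 0..n-1.
    cost: dict mapping frozenset({u, v}) -> edge weight."""
    S = set(S)
    cut_cost = sum(w for e, w in cost.items() if len(e & S) == 1)
    return cut_cost / min(len(S), n - len(S))

def sparsest_cut_bruteforce(cost, n):
    """Exhaustive search -- exponential, only for tiny n."""
    return min((sparsity(cost, S, n), S)
               for k in range(1, n // 2 + 1)
               for S in combinations(range(n), k))

# Unit-weight 4-cycle 0-1-2-3-0: cutting two opposite edges
# gives cost 2 with both sides of size 2, so sparsity 1.0.
c4 = {frozenset(e): 1 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(sparsest_cut_bruteforce(c4, 4))  # (1.0, (0, 1))
```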
3 — 15:00 — On the Polynomial Solvability of the Two-Stage Robust Perfect b-Matching Problem
The perfect b-matching problem is a well-known variant of the famous matching problem with many applications. Given an undirected graph, each vertex v must be matched exactly b(v) times, and edges can be used multiple times. The problem can be solved in polynomial time. In theory, the capacity b(v) is fixed and known for each vertex, but in practice, the capacities are often subject to uncertainty. In this talk, we present a two-stage variant of the perfect b-matching problem under capacity uncertainty that ensures robustness, called the directed robust (perfect) b-matching problem.
Here, we consider a directed graph. In the first stage, for each vertex, we set a pre-matching, which fixes the value of the matching over all outgoing arcs. Then, depending on the pre-matching, the worst-case scenario is chosen from a given set of scenarios. Finally, a perfect matching is set in the second stage, minimizing the costs while respecting the pre-matching as well as the capacities given by the scenario.
The problem turns out to be weakly NP-hard even on bipartite graphs. However, we show that it can be solved in polynomial time on graphs without directed alternating cycles.
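As a small illustration of the underlying notion from the first paragraph (not of the robust variant itself), checking that a set of edge multiplicities forms a perfect b-matching amounts to verifying the degree constraint at every vertex. The function and variable names below are my own, chosen for illustration.

```python
from collections import defaultdict

def is_perfect_b_matching(x, b):
    """x: dict mapping edge (u, v) -> nonnegative multiplicity
    (edges may be used multiple times); b: vertex -> capacity.
    Returns True iff every vertex v is matched exactly b[v] times."""
    deg = defaultdict(int)
    for (u, v), m in x.items():
        deg[u] += m
        deg[v] += m
    return all(deg[v] == bv for v, bv in b.items())

# Triangle with b(v) = 2 for every vertex: using each edge once
# matches each vertex exactly twice.
b = {0: 2, 1: 2, 2: 2}
x = {(0, 1): 1, (1, 2): 1, (0, 2): 1}
print(is_perfect_b_matching(x, b))  # True
```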
4 — 15:30 — Spectral sparsification of hypergraphs
Spectral sparsification is a technique to approximate dense objects (e.g., weighted graphs and matrices) with sparse objects while preserving the original spectral information (e.g., eigenvalues and energy functions). This concept has been a cornerstone for developing faster algorithms in continuous/combinatorial optimization and numerical linear algebra. In this talk, I will discuss recent results on spectral sparsification for hypergraphs. Since hypergraph energy functions are no longer quadratic, spectral sparsification of hypergraphs presents additional challenges compared to graph spectral sparsification and requires more advanced tools. I will present recent advances in simple edge-sampling sparsification algorithms and their analysis.
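To see why hypergraph energies are harder, recall the standard hypergraph energy function Q(x) = sum_e w_e · max_{u,v in e} (x_u - x_v)^2, which is piecewise quadratic rather than quadratic (the maximizing pair depends on x). A minimal sketch, with illustrative names only:

```python
def hypergraph_energy(hyperedges, weights, x):
    """Q(x) = sum_e w_e * max_{u,v in e} (x_u - x_v)^2.
    The inner max over pairs equals (max_v x_v - min_v x_v)^2,
    so Q is piecewise quadratic in x, not quadratic -- which is
    why graph sparsification arguments don't carry over directly."""
    return sum(w * (max(x[v] for v in e) - min(x[v] for v in e)) ** 2
               for e, w in zip(hyperedges, weights))

# One hyperedge {0, 1, 2} with weight 1 and potentials x = (0, 1, 3):
# energy is (3 - 0)^2 = 9; for an ordinary edge {0, 1} it is (1 - 0)^2 = 1.
print(hypergraph_energy([{0, 1, 2}], [1.0], [0.0, 1.0, 3.0]))  # 9.0
```

A sparsifier keeps a reweighted subset of hyperedges so that this energy is preserved up to a (1 ± ε) factor for all x.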