1. 08:30 — Convex Network Flows

We introduce a general framework for flow problems over hypergraphs. In our problem formulation, which we call the convex flow problem, we have a concave utility function for the net flow at every node and a concave utility function for each edge flow. The objective is to maximize the sum of these utilities, subject to constraints on the allowable flows at each edge, which we assume only to form convex sets. This framework not only includes many classic problems in network optimization, such as max flow, min-cost flow, and multi-commodity flows, but also generalizes these problems to allow, for example, concave edge gain functions. In addition, our framework includes applications spanning a number of fields: optimal power flow over lossy networks, routing and resource allocation in ad-hoc wireless networks, Arrow-Debreu Nash bargaining, and order routing through financial exchanges, among others. We show that the convex flow problem has a dual with a number of interesting interpretations, and that this dual decomposes over the edges of the hypergraph. Using this decomposition, we propose a fast solution algorithm that parallelizes over the edges and admits a clean problem interface. We provide an open source implementation of this algorithm in the Julia programming language, which we show is significantly faster than the state-of-the-art commercial convex solver Mosek on the optimal power flow problem and the decentralized exchange order routing problem.
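For concreteness, one plausible way to write this formulation (the abstract does not fix notation, so the symbols $U$, $V_i$, $A_i$, and $T_i$ below are illustrative, not the authors') is

$$
\begin{array}{ll}
\text{maximize} & U(y) + \sum_{i=1}^{m} V_i(x_i) \\
\text{subject to} & y = \sum_{i=1}^{m} A_i x_i, \quad x_i \in T_i, \quad i = 1, \dots, m,
\end{array}
$$

where $y$ collects the net flows at the nodes, $x_i$ is the flow on hyperedge $i$, $A_i$ maps edge-local flows to node flows, $U$ and the $V_i$ are concave utilities, and each $T_i$ is a convex set of allowable edge flows. The edge-wise separability of the $V_i$ and $T_i$ is what lets the dual decompose over the edges.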

2. 09:00 — ** CANCELLED ** A full splitting algorithm for fractional programs with structured numerators and denominators

In this paper, we consider a class of nonconvex and nonsmooth fractional programming problems whose numerator is the sum of a convex, possibly nonsmooth function composed with a linear operator and a differentiable, possibly nonconvex function, and whose denominator is a convex, possibly nonsmooth function composed with a linear operator. These problems have applications in various fields, including CT reconstruction and sparse signal recovery. We propose an adaptive full-splitting proximal subgradient algorithm with an extrapolated step that addresses the challenge of evaluating the composition in the numerator by decoupling the linear operator from the nonsmooth component. Specifically, the nonsmooth function in the numerator is evaluated via its proximal operator, while its linear operator is applied only through forward evaluations. Furthermore, the smooth component in the numerator is evaluated through its gradient, the nonsmooth component in the denominator is handled via its subgradient, and the linear operator in the denominator is likewise applied only through forward evaluations. We demonstrate subsequential convergence toward an approximate lifted stationary point and ensure global convergence under the Kurdyka-\L ojasiewicz property, all achieved {\it without relying on any full-row rank assumptions regarding the linear operators}. We further explain why we aim for an approximate lifted stationary point by constructing a scenario in which the algorithm can diverge when seeking exact solutions. Lastly, we present a practical variant of the algorithm incorporating a nonmonotone line search, which significantly enhances its convergence performance. Our theoretical findings are validated through simulations involving limited-angle CT reconstruction and the robust Sharpe ratio minimization problem.
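In symbols, the problem class described above appears to be of the form (illustrative notation, with the denominator assumed positive on the feasible set):

$$ \min_{x} \; \frac{f(Ax) + h(x)}{g(Bx)}, $$

where $f$ is convex and possibly nonsmooth, $h$ is differentiable and possibly nonconvex, $g$ is convex and possibly nonsmooth, and $A$ and $B$ are linear operators. The "full splitting" consists in touching $f$ only through its proximal operator, $g$ only through its subgradient, $h$ only through its gradient, and $A$ and $B$ only through forward evaluations, with no matrix inversions or full-row rank assumptions.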

3. 09:30 — O-minimal structures and central paths in nonlinear optimization

A central path lies at the heart of interior point methods for nonlinear optimization. We prove quantitative results about central paths in nonlinear optimization with definable functions in an o-minimal structure over $\mathbb{R}$. Using results from singularity theory and o-minimal geometry, we establish conditions for the existence, convergence, and smoothness of a central path at its limit point.
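As a reminder of the object under study (stated here for the standard logarithmic barrier; the definable setting of the talk is more general): for a problem $\min\{f(x) : g_i(x) \le 0,\ i = 1, \dots, m\}$, the central path is the curve

$$ x(\mu) = \operatorname*{argmin}_{x} \; f(x) - \mu \sum_{i=1}^{m} \log\bigl(-g_i(x)\bigr), \qquad \mu > 0, $$

and the questions are whether $x(\mu)$ is well defined, whether it converges as $\mu \to 0^{+}$, and how smoothly it extends to the limit point.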

4. 10:00 — On describing convex hull of quadratic sets via aggregations

Obtaining an efficient and concise description of the convex hull of a set defined by quadratic inequalities is a fundamental task in optimization, often appearing as a subroutine (e.g., the trust region subproblem) of some larger problem. A particular case is when the convex hull can be obtained by taking certain nonnegative linear combinations of the defining constraints, which we call aggregations.
For an arbitrary number of (open) quadratic constraints, we propose the condition of hidden hyperplane convexity (HHC), which is sufficient for special aggregations to describe the convex hull. Under a further condition, we show that the number of aggregations needed in the description is at most quadratic in the number of constraints. This answers a question asked by Dey, Muñoz, and Serrano.
We also discuss a condition under which our results generalize to closed constraints. We do not need the assumption that all connected components are full dimensional, which was required in the work of Modaresi and Vielma.
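To fix ideas (illustrative notation, not taken from the talk): given $S = \{x \in \mathbb{R}^n : q_1(x) < 0, \dots, q_m(x) < 0\}$ with each $q_i$ quadratic, an aggregation with weights $\lambda \in \mathbb{R}^m_{+}$ is the single quadratic inequality $\sum_{i=1}^{m} \lambda_i q_i(x) \le 0$. The question is when the closed convex hull is exactly an intersection of such aggregated sets,

$$ \overline{\operatorname{conv}}(S) = \bigcap_{\lambda \in \Lambda} \Bigl\{ x : \sum_{i=1}^{m} \lambda_i q_i(x) \le 0 \Bigr\}, $$

with $\Lambda \subset \mathbb{R}^m_{+}$ chosen so that each aggregated set is convex; under HHC and the further condition above, $|\Lambda|$ can be taken to be $O(m^2)$.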