1. 08:30 — An Adaptive Block Proximal ADMM for Weakly Convex, Linearly-Constrained Composite Functions

This work presents AP-ADMM, an adaptive, cyclic block proximal version of the alternating direction method of multipliers for solving linearly-constrained, weakly convex, composite optimization problems. The method adapts to all problem parameters, including the smoothness and weak convexity constants. We assume that the smooth component of the objective is weakly convex and possibly nonseparable, while the non-smooth component is convex, block-separable, and has easily computable block proximal maps. Each iteration of AP-ADMM consists of: (1) sequentially solving all block proximal subproblems, and (2) performing full Lagrange multiplier and/or penalty parameter updates based on a novel update criterion. Without any rank assumptions on the constraint matrices, AP-ADMM is shown to obtain an approximate first-order stationary point of the constrained problem in $O(\epsilon^{-3})$ iterations.
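A minimal numerical sketch of the two-step iteration described above, on a toy two-block instance. The smooth part here is convex (weak convexity constant 0) just to keep the sketch runnable, and the fixed step sizes and plain multiplier update stand in for the paper's adaptive update criterion; none of the numerical choices below are taken from the talk.

```python
import numpy as np

# Toy instance: minimize 0.5*||x||^2 + ||x||_1 subject to A1@x1 + A2@x2 = b,
# split into two blocks. rho, eta, and the unconditional dual update are
# illustrative placeholders, not the paper's adaptive criterion.
rng = np.random.default_rng(0)
A = [rng.standard_normal((3, 4)), rng.standard_normal((3, 4))]
b = rng.standard_normal(3)
x = [np.zeros(4), np.zeros(4)]
lam = np.zeros(3)
rho, eta = 1.0, 0.05            # penalty parameter and proximal step size

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 (the block prox assumed easy to compute)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def residual():
    return A[0] @ x[0] + A[1] @ x[1] - b

for k in range(2000):
    # (1) cyclic block proximal updates on the augmented Lagrangian
    for i in range(2):
        r = residual()
        grad = x[i] + A[i].T @ (lam + rho * r)  # grad of smooth + penalty part
        x[i] = soft_threshold(x[i] - eta * grad, eta)
    # (2) full multiplier update; AP-ADMM would first test its update
    # criterion before changing lam and/or rho
    lam = lam + rho * residual()

print(np.linalg.norm(residual()))   # constraint violation after 2000 sweeps
```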

2. 09:00 — Manifolds of solutions in deterministic global optimization

Continuous symmetries and submanifolds of solutions frequently pose problems for deterministic global optimization methods such as branch-and-bound or branch-and-reduce. Typically, they lead to excessive box splitting in the vicinity of the solutions and to exponential running times. We develop verified computing methods for symmetries based on compact and non-compact matrix Lie groups and show how to apply them within the branch-and-bound framework. We furthermore present verified continuation techniques that yield inclusion/exclusion regions around solution manifolds.

3. 09:30 — Difference-of-Convex Piecewise Linear Delta-Approximation and Its Application in Global Optimization

Using piecewise linear functions to approximate nonlinear relationships is a widely used strategy in engineering practice. Piecewise linear approximation has also been shown to be effective in solving challenging optimization problems. In this work, we consider the problem of finding a continuous piecewise-linear (CPWL) approximation of any multivariable continuous function over any compact set, for any predefined error tolerance δ, while keeping the number of polytopes low. We introduce the difference-of-convex (DC) CPWL representation, which expresses any CPWL function as the difference of two convex CPWL functions. A major advantage of the DC-CPWL representation is that it allows polytopes of any shape, so it expresses CPWL functions more efficiently than representations limited to rectangular and triangular polytopes. We develop a finitely convergent algorithm to find a DC-CPWL δ-approximator. The computational bottleneck of the algorithm is the solution of mixed-integer linear programming (MILP) fitting problems of growing size. To alleviate this burden, we also develop novel algorithms that solve a sequence of linear programming (LP) subproblems instead of MILP subproblems. The advantages of the proposed DC-CPWL δ-approximation methods are demonstrated through computational studies.
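A small illustration of the DC-CPWL representation itself (not the paper's fitting algorithm): the nonconvex "hat" function, a CPWL function, written exactly as the difference of two convex CPWL (max-affine) functions. The particular decomposition below is one hand-built example.

```python
import numpy as np

# Nonconvex CPWL target: hat(x) = max(0, 1 - |x|).
def hat(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

# DC decomposition hat = g - h with g, h both convex CPWL:
def g(x):                       # convex CPWL (sum of two ReLU pieces)
    return np.maximum(x + 1.0, 0.0) + np.maximum(x - 1.0, 0.0)

def h(x):                       # convex CPWL
    return 2.0 * np.maximum(x, 0.0)

xs = np.linspace(-3.0, 3.0, 601)
err = np.max(np.abs(hat(xs) - (g(xs) - h(xs))))
print(err)                      # representation error on the grid (tiny)
```

The decomposition follows from the slope changes of the hat function (+1 at x = -1, -2 at x = 0, +1 at x = 1): positive slope jumps go into g, negative ones into h.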

A piecewise linear (PWL) δ-approximator can be trivially modified into a δ-underestimator, which leads to tight cutting planes that can accelerate branch-and-bound based global optimization. We consider nonconvex operational optimization problems that include complicating functions. These problems must be solved repeatedly in real time within limited time windows. We first generate the PWL δ-underestimators of the complicating functions offline. Then, at the root node of a branch-and-bound based solution procedure, we generate a pool of valid cutting planes from the PWL δ-underestimators of the complicating functions. At each subsequent node, one or more cutting planes from the pool may be added to the problem, depending on how effectively they tighten the convex relaxation. We demonstrate the advantages of the proposed solution method through preliminary computational results.

4. 10:00 — ** CANCELLED ** Location of optima and vertex optimality

We present new necessary conditions for the existence of a global minimizer of a nonlinear function on a polyhedron in the relative interior of one of its faces.
As a consequence, we obtain an extension, to nonlinear and to quadratic programming, of the classical fundamental theorem of linear programming, which guarantees the existence of a global minimum at a vertex of the feasible region.
In the quadratic case, we also provide a simple characterization of the class of functions whose restriction to each face of a polyhedron attains its minimum at a vertex of that face.
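A numerical illustration of the vertex-optimality phenomenon the talk generalizes, using a hand-picked example: a concave quadratic on a box attains its global minimum at a vertex, just as a linear objective does in the fundamental theorem of linear programming.

```python
import itertools
import numpy as np

def f(x):
    # Concave quadratic: its minimum over any polytope sits at a vertex.
    return -float(np.sum(np.asarray(x, dtype=float) ** 2))

# Vertices of the box [0,1]^2.
vertices = list(itertools.product([0.0, 1.0], repeat=2))
best_vertex = min(f(v) for v in vertices)

# Brute-force check over a dense grid of the whole box.
pts = [(a, b) for a in np.linspace(0, 1, 51) for b in np.linspace(0, 1, 51)]
best_grid = min(f(p) for p in pts)

print(best_vertex, best_grid)   # both -2.0: the minimum is at vertex (1, 1)
```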