1. 14:00 — PDE Constrained Optimization and Digital Twins

With recent advances in computing resources and interdisciplinary collaboration, a new research field called Digital Twins (DTs) is emerging. Data from sensors located on a physical system is fed into its DT, and the DT in turn helps make decisions about the physical system. This cycle continues for the lifetime of the physical system. Typical examples are a bridge or a human heart.

In many cases, these problems can be cast as PDE-constrained optimization (PDECO) problems. This talk begins by discussing the role of PDECO in DTs. All the aforementioned decisions must be made while accounting for the underlying uncertainties; in this vein, a risk-averse optimization framework is considered. To overcome the high computational cost due to the high-dimensional uncertainty, a tensor-train decomposition framework is introduced.
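For readers unfamiliar with the compression at work here, the following sketch (assuming NumPy; a generic TT-SVD sweep, not the speaker's implementation) shows how a high-dimensional array is stored as a chain of small cores instead of a full tensor:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Compress a d-way tensor into tensor-train cores by sequential
    truncated SVDs (a basic TT-SVD sweep; max_rank caps every TT rank)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(r_prev * shape[0], -1)
    for k in range(tensor.ndim - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        unfolding = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, shape[-1], 1))
    return cores

rng = np.random.default_rng(0)
t = rng.standard_normal((5, 5, 5, 5))
cores = tt_svd(t, max_rank=3)
print("core shapes:", [c.shape for c in cores])
print("entries: full =", t.size, "vs TT =", sum(c.size for c in cores))
```

For a tensor with d modes of size n and TT ranks bounded by r, storage drops from n^d to O(d n r^2) entries, which is what makes computation under high-dimensional uncertainty tractable.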

A fundamental challenge for these problems is the ambiguity in the underlying distribution. The last part of the talk will introduce a “Rockafellian” framework for PDECO that is robust to inaccuracies in the precise form of the problem uncertainty. The framework is based on problem relaxation and involves optimizing a bivariate objective functional that features both a standard control variable and an additional perturbation variable that handles the distributional ambiguity.
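As a finite-dimensional caricature of such a relaxation (the scenario weights, losses, and l1 penalty below are illustrative assumptions, not the speaker's formulation), one can jointly optimize a control u and a perturbation z of a possibly misspecified nominal distribution:

```python
import numpy as np
from scipy.optimize import minimize

# Nominal scenario weights (possibly misspecified) and per-scenario losses.
p_nom = np.array([0.5, 0.3, 0.2])
loss = lambda u: np.array([(u - 1.0)**2, (u + 1.0)**2, u**2])
theta = 10.0  # price of perturbing the nominal distribution

def rockafellian(x):
    """Bivariate objective in (u, z): a control u plus a perturbation z of
    the scenario weights. The l1 penalty keeps z at zero unless shifting
    weight away from p_nom pays for itself; in a serious treatment z would
    also be constrained so that p_nom + z stays a probability distribution."""
    u, z = x[0], x[1:]
    return (p_nom + z) @ loss(u) + theta * np.abs(z).sum()

res = minimize(rockafellian, x0=np.zeros(4), method="Nelder-Mead")
print("u* =", res.x[0], "z* =", res.x[1:])  # expect z* ~ 0 here
```

When the nominal weights are accurate, the penalty drives z to zero and the relaxed problem reduces to the nominal one; ambiguity is absorbed by z only when the data contradicts p_nom strongly enough.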

2. 14:30 — Semi-smoothness in Infinite Dimensional Non-Smooth Optimization

We consider the local convergence of proximal Newton methods in Hilbert spaces. Fast local convergence of the unregularized algorithm can be shown straightforwardly under a semi-smoothness assumption on the differentiable part of the objective. However, the transition to fast local convergence of a globalized algorithm is surprisingly subtle and requires additional insight into the problem. In this talk we elaborate on this topic, where non-smooth optimization and semi-smooth Newton methods meet.
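For orientation, here is a minimal finite-dimensional sketch of the unregularized proximal Newton iteration for a composite objective f(x) + lam*||x||_1; the infinite-dimensional and globalization issues the talk addresses do not appear at this level:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, i.e. the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_l1(grad_f, hess_f, lam, x0, outer=20, inner=30):
    """Unregularized proximal Newton for min f(x) + lam*||x||_1.
    Each outer step builds the model  g'd + 0.5 d'H d + lam*||x + d||_1
    and solves it approximately by cyclic coordinate descent."""
    x = x0.astype(float).copy()
    for _ in range(outer):
        g, H = grad_f(x), hess_f(x)
        d = np.zeros_like(x)
        for _ in range(inner):
            for i in range(len(x)):
                c = g[i] + H[i] @ d - H[i, i] * d[i]  # coupling to other coords
                d[i] = soft(x[i] - c / H[i, i], lam / H[i, i]) - x[i]
        x = x + d  # full step: no globalization, hence only local guarantees
    return x

# Toy composite problem: 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_star = prox_newton_l1(lambda x: A.T @ (A @ x - b), lambda x: A.T @ A,
                        lam=0.5, x0=np.zeros(10))
print("sparse minimizer:", np.round(x_star, 3))
```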

3. 15:00 — A numerical solution approach for non-smooth optimal control problems based on the Pontryagin maximum principle

Optimal control problems subject to partial differential equations appear in many applications.
In recent years, interest has shifted to non-smooth optimal control problems.
In this talk, we present an algorithm to numerically solve non-smooth and non-convex optimal control problems.
The cost functional is assumed to be of integral type with possibly non-smooth and non-convex integrands.
As state equation we choose semilinear elliptic or parabolic partial differential equations.
Then the method is applicable to a wide range of problems, including problems with integer-valued controls.
For problems with this structure, the celebrated Pontryagin maximum principle is a necessary optimality condition,
which has no analogue in finite-dimensional optimization.
The method we present is a gradient-like scheme based on the Pontryagin maximum principle
and includes an Armijo-type line-search procedure.
The descent direction is motivated by topological derivatives.
We discuss convergence properties of the method: weak limit points are stationary in the following sense.
If the state part of the functional is convex, then the iterates converge to a global solution of the problem.
Otherwise, the maximum principle is fulfilled at weak limit points up to an epsilon.
We emphasize that the method is applicable to problems where the existence of optimal controls cannot be proven.
In addition, we present results of numerical experiments.
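To give a flavor of the algorithmic skeleton only, the sketch below runs an Armijo-globalized gradient scheme on a smooth linear-quadratic toy problem; the actual method replaces this plain reduced gradient with a direction derived from the Pontryagin maximum principle and topological derivatives, and handles non-smooth, non-convex integrands and semilinear state equations:

```python
import numpy as np

def gradient_armijo(y_d, alpha=1e-3, outer=50):
    """Armijo-globalized gradient scheme for the linear-quadratic toy
        min_c 0.5*||y - y_d||^2 + 0.5*alpha*||c||^2   s.t.  -y'' = c,
    discretized by finite differences on (0,1) with zero boundary values."""
    n = len(y_d)
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    def J(c):
        y = np.linalg.solve(A, c)  # state equation
        return 0.5 * np.sum((y - y_d) ** 2) + 0.5 * alpha * np.sum(c ** 2), y

    c = np.zeros(n)
    for _ in range(outer):
        j, y = J(c)
        p = np.linalg.solve(A, y - y_d)  # adjoint equation (A is symmetric)
        g = alpha * c + p                # reduced gradient
        t = 1.0
        while J(c - t * g)[0] > j - 1e-4 * t * np.sum(g ** 2):  # Armijo test
            t *= 0.5
        c -= t * g
    return c

x = np.linspace(0, 1, 52)[1:-1]  # 50 interior grid points
c_opt = gradient_armijo(y_d=np.sin(np.pi * x))
print("final control norm:", np.linalg.norm(c_opt))
```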

4. 15:30 — Interweaved first-order methods for PDE-constrained and bilevel optimisation

PDE-constrained optimisation problems, as well as bilevel optimisation problems—which cover nonsmooth PDE constraints, such as Bingham flow—have traditionally been solved with one of two approaches: (a) Newton-type methods applied to sufficiently smooth optimality conditions for a constrained problem formulation, or (b) treating the inner problem or partial differential equation through its solution mapping. Unless derivative-free methods are applied, the latter generally involves calculating both the solution mapping and its derivative to high precision on each step of the outer optimisation method.

Recently in bilevel optimisation research, especially as applied to machine learning, so-called single-loop approaches have been introduced to reduce the computational cost of repeatedly solving the inner problem. On each step of the outer method, such methods take only a single step of a conventional optimisation method towards the solution of the inner problem, bridging the gap between the two aforementioned approaches. The same principle can be applied to PDE-constrained optimisation. As we have recently shown, significant performance improvements can be obtained by interweaving the steps of conventional iterative solvers (Jacobi, Gauss–Seidel, conjugate gradients) for both the PDE and its adjoint with the steps of the optimisation method. Moreover, in this talk, we demonstrate how the adjoint equation in bilevel problems can also benefit from such interweaving with conventional linear system solvers.
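A minimal sketch of the interweaving idea (assuming NumPy; a small diagonally dominant system stands in for the discretized PDE, and the step sizes are chosen ad hoc rather than by the theory behind the talk):

```python
import numpy as np

def jacobi_sweep(A, x, b):
    """One Jacobi sweep for A x = b."""
    return x + (b - A @ x) / np.diag(A)

def interweaved(y_d, A, alpha=0.1, step=0.1, iters=500):
    """Single-loop scheme for  min_c 0.5*||y - y_d||^2 + 0.5*alpha*||c||^2
    subject to A y = c: each control update is interweaved with a SINGLE
    Jacobi sweep on the state and adjoint systems; neither is ever solved
    to high precision."""
    y, p, c = (np.zeros_like(y_d) for _ in range(3))
    for _ in range(iters):
        y = jacobi_sweep(A, y, c)        # inexact state update
        p = jacobi_sweep(A, p, y - y_d)  # inexact adjoint update (A = A^T)
        c = c - step * (alpha * c + p)   # gradient step with stale y and p
    return c

n, alpha = 30, 0.1
A = 2 * np.eye(n) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))  # stand-in "PDE"
y_d = np.sin(np.linspace(0, np.pi, n))
c = interweaved(y_d, A, alpha=alpha)

# Reference: exact reduced-problem solution (alpha*I + A^-2) c = A^-1 y_d.
Ai = np.linalg.inv(A)
c_ref = np.linalg.solve(alpha * np.eye(n) + Ai @ Ai, Ai @ y_d)
print("relative error:", np.linalg.norm(c - c_ref) / np.linalg.norm(c_ref))
```

Each outer iteration then costs one matrix-vector product per system instead of a full solve, and the state, adjoint, and control converge jointly.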