### 1 — 16:20 — Snapshot generation for POD models of semilinear parabolic optimal control problems based on a simplified Newton method

In PDE-constrained optimization, proper orthogonal decomposition (POD) provides a surrogate model of a (potentially expensive) PDE discretization on which the optimization iterations are executed. Because POD models usually provide good approximation quality only locally, they have to be updated during the optimization. Updating the POD model is usually expensive, however, and therefore often infeasible in a model-predictive control (MPC) context, so reduced models of mediocre quality may have to be accepted. We adopt the viewpoint of a simplified Newton method for solving semilinear evolution equations to derive an algorithm that can serve as an offline phase for producing a POD model. Approaches that build the POD model from impulse-response snapshots can be regarded as the first Newton step in this context. In particular, we extend POD models based on impulse-response snapshots by a second simplified Newton step. This procedure improves the approximation quality of the POD model significantly at a moderate amount of extra computational cost during the optimization or the MPC loop. We illustrate our findings with an example satisfying our assumptions.
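As a rough illustration of the two-step idea (a minimal sketch, not the authors' algorithm), one can freeze the linearization of a semilinear equation at a base point and reuse that single linear solver for both Newton steps. The toy problem below is a 1D semilinear heat equation u_t = u_xx - u^3 with a fixed initial condition standing in for genuine impulse-response data; all names, discretization choices, and the basis size are hypothetical.

```python
import numpy as np

# toy 1D semilinear heat equation u_t = u_xx - u^3 on (0,1), homogeneous
# Dirichlet boundary conditions; all discretization choices are illustrative
n, m, dt = 100, 200, 2e-3
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
x = np.linspace(dx, 1.0 - dx, n)
u0 = np.sin(np.pi * x)
M = np.eye(n) - dt * A          # implicit Euler matrix for the *frozen* linearization

def solve_linearized(source):
    """Implicit Euler for u' = A u + source[:, k]; a simplified Newton step
    only ever solves this fixed linear problem with a new source term."""
    U, u = np.zeros((n, m)), u0.copy()
    for k in range(m):
        u = np.linalg.solve(M, u + dt * source[:, k])
        U[:, k] = u
    return U

# step 1: linearizing at zero gives the plain heat equation (this plays the
# role of the impulse-response snapshots in the abstract)
Y1 = solve_linearized(np.zeros((n, m)))
# step 2: same frozen linear operator, with the residual of the cubic
# nonlinearity along the first iterate entering as a source term
Y2 = solve_linearized(-Y1**3)

def pod_basis(Y, r):
    """POD basis of rank r: leading left singular vectors of the snapshots."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r]

Phi1 = pod_basis(Y1, 5)                     # basis from step-1 snapshots only
Phi2 = pod_basis(np.hstack([Y1, Y2]), 5)    # enriched by the second Newton step
```

The extra cost of the second step is one more linear solve per time step plus a larger SVD in the offline phase, which matches the abstract's claim of only a moderate overhead relative to resolving the full nonlinear model.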

### 2 — 16:50 — On the well-posedness of the Mortensen observer for a defocusing cubic wave equation

In this presentation we discuss the analytical background of nonlinear observers based on minimal energy estimation. Originally, such strategies were proposed for reconstructing the state of a finite-dimensional dynamical system from a measured output when both the dynamics and the output are subject to white noise. Our work aims to lift this concept to a class of partial differential equations featuring deterministic perturbations, using the example of a wave equation with a cubic defocusing term in three space dimensions. In particular, we investigate the local regularity of the corresponding value function and consider operator Riccati equations to characterize its second spatial derivative.
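As a schematic finite-dimensional analogue (not the infinite-dimensional setting of the talk), the minimal energy estimation problem behind the Mortensen observer can be sketched as follows; the dynamics f, output map C, and prior x_0 are generic placeholders introduced here for illustration.

```latex
% Value function of minimal energy estimation: among all disturbances v
% steering \dot{x} = f(x) + v to x(t) = \xi, pick the least-energy
% explanation of the measurement y(s) \approx C x(s).
\[
  V(\xi, t) \;=\; \inf_{\substack{\dot{x} = f(x) + v \\ x(t) = \xi}}
  \; \frac{1}{2}\,\lvert x(0) - x_0 \rvert^2
  \;+\; \frac{1}{2} \int_0^t \lvert v(s)\rvert^2
        + \lvert y(s) - C x(s)\rvert^2 \,\mathrm{d}s .
\]
% The observer state minimizes V(\cdot, t); formally differentiating the
% optimality condition \partial_x V(\hat{x}(t), t) = 0 in time yields
\[
  \dot{\hat{x}}(t) \;=\; f(\hat{x}(t))
  \;+\; \bigl(\partial_{xx} V(\hat{x}(t), t)\bigr)^{-1}
        C^{\top}\bigl( y(t) - C \hat{x}(t) \bigr),
\]
% which is why regularity of V and a Riccati characterization of the
% second derivative \partial_{xx} V are the central analytical questions.
```

In the PDE setting of the talk, making sense of the inverse of the second spatial derivative of V is precisely what the operator Riccati equations are needed for.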

### 3 — 17:20 — Adaptive Randomized Sketching for Nonsmooth Dynamic Optimization

Dynamic optimization problems arise in many applications, such as optimal flow control, full waveform inversion, and medical imaging. Despite their ubiquity, such problems are plagued by significant computational challenges. For example, memory is often a limiting factor when determining whether a problem is tractable, since the evaluation of derivatives requires the entire state trajectory. Many applications additionally employ nonsmooth regularizers such as the L1 norm or the total variation, as well as auxiliary constraints on the optimization variables. To address these two challenges, we introduce a novel trust-region algorithm for minimizing the sum of a smooth, nonconvex function and a nonsmooth, convex function. Our algorithm employs randomized sketching to store a compressed version of the state trajectory for use in derivative computations. By allowing the trust-region algorithm to adaptively learn the rank of the state sketch, we arrive at a provably convergent method with near-optimal memory requirements. We demonstrate the efficacy of our method on a few control problems in dynamic PDE-constrained optimization.
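A minimal sketch of the compression idea (assuming a fixed-rank, Tropp-style two-sided sketch rather than the adaptive rank selection described in the abstract): the state trajectory is touched one time step at a time and never stored in full, and a low-rank reconstruction is formed later when derivative computations need the states. All sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 400          # spatial dofs, time steps: full trajectory has n*m entries
r, l = 20, 41            # sketch sizes (fixed here; the talk's method adapts the rank)

# synthetic "state trajectory" with low-rank structure plus small noise
Y = (rng.standard_normal((n, 10)) @ rng.standard_normal((10, m))
     + 1e-6 * rng.standard_normal((n, m)))

# two-sided randomized sketch: only S1, S2 and the test matrices are stored
Omega = rng.standard_normal((m, r))   # range test matrix
Psi = rng.standard_normal((l, n))     # co-range test matrix
S1 = np.zeros((n, r))                 # accumulates the range sketch  Y @ Omega
S2 = np.zeros((l, m))                 # accumulates the co-range sketch  Psi @ Y
for k in range(m):                    # simulate streaming over time steps
    yk = Y[:, k]                      # state at step k, then discarded
    S1 += np.outer(yk, Omega[k, :])
    S2[:, k] = Psi @ yk

# low-rank reconstruction used when adjoint/derivative computations need states:
# Y is approximated by Q (Psi Q)^+ (Psi Y) with Q an orthonormal range basis
Q, _ = np.linalg.qr(S1)
Yhat = Q @ (np.linalg.pinv(Psi @ Q) @ S2)
```

The stored quantities scale like O((n + m) r) instead of O(n m), which is where the near-optimal memory requirement comes from; the trust-region mechanism in the talk additionally monitors the induced inexactness in the derivatives to adapt r.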