We investigate the use of high-order polynomial local models within adaptive regularization frameworks for solving nonconvex optimization problems. Although tensor-based models and methods have preoccupied optimization researchers in the past, the topic is experiencing a resurgence, possibly originating with the development of carefully regularized variants of Newton's method. It was shown, in both convex and nonconvex settings, that regularization algorithms allow higher-than-second derivatives to be naturally incorporated into their local model construction, while controlling convergence by means of a regularization parameter present in these models. Furthermore, the global convergence rate and worst-case evaluation complexity of the ensuing methods proved optimal within the respective classes of algorithms that use derivatives of the corresponding order, or approximations thereof. We review these developments and establish connections to polynomial optimization and related techniques in our effort to minimize the resulting polynomial subproblems efficiently, to local or global optimality. A robust adaptive procedure for the choice of the regularization parameter is key to numerical performance, and we detail some of the intricacies involved in its design. Finally, we bring these ingredients together in our quest to develop practical tensor methods, with exact and inexact derivatives, subproblem solutions and adaptive parameters, and ask the challenging question: are tensor methods preferable to second-order ones in numerical practice?
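To make the regularization mechanism concrete, the sketch below implements one step of the second-order member of this family (p = 2, i.e. cubic regularization) in Python. It is a minimal illustration, not the authors' implementation: the acceptance thresholds eta1 and eta2, the update factor gamma, and the use of a generic scipy solver for the subproblem are all assumptions made here for demonstration.

```python
# A minimal sketch of one adaptive-regularization step with a cubic
# (p = 2) local model. The constants and the subproblem solver are
# illustrative choices, not taken from the source.
import numpy as np
from scipy.optimize import minimize

def ar2_step(f, grad, hess, x, sigma, eta1=0.1, eta2=0.9, gamma=2.0):
    g, H = grad(x), hess(x)

    # Regularized local model: second-order Taylor expansion plus a
    # cubic term weighted by the regularization parameter sigma.
    def model(s):
        return (f(x) + g @ s + 0.5 * s @ H @ s
                + sigma / 3.0 * np.linalg.norm(s) ** 3)

    # Minimize the polynomial subproblem (here only to local
    # optimality, starting from the zero step).
    s = minimize(model, np.zeros_like(x)).x

    # Ratio of achieved to predicted decrease drives the adaptive
    # update of sigma.
    predicted = f(x) - model(s)
    actual = f(x) - f(x + s)
    rho = actual / predicted if predicted > 0 else -np.inf

    if rho >= eta1:                  # successful: accept the step
        x = x + s
        if rho >= eta2:              # very successful: relax sigma
            sigma = max(sigma / gamma, 1e-8)
    else:                            # unsuccessful: regularize harder
        sigma = gamma * sigma
    return x, sigma

# Usage on a simple nonconvex function (again a hypothetical example):
# f(x) = x0^4 - x0^2 + x1^2, whose minimizers lie at (+-1/sqrt(2), 0).
f = lambda x: x[0]**4 - x[0]**2 + x[1]**2
grad = lambda x: np.array([4*x[0]**3 - 2*x[0], 2*x[1]])
hess = lambda x: np.array([[12*x[0]**2 - 2, 0.0], [0.0, 2.0]])
x, sigma = np.array([1.5, 1.0]), 1.0
for _ in range(20):
    x, sigma = ar2_step(f, grad, hess, x, sigma)
```

Higher-order members of the family follow the same pattern: the Taylor part of the model is extended to pth-order derivative tensors and the regularization term becomes sigma/(p+1) times the (p+1)st power of the step norm, so the difficulty shifts to minimizing the resulting polynomial subproblem, as discussed above.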