I will discuss recent work with my collaborators on the design and analysis of stochastic algorithms for solving constrained optimization problems with continuous objective and constraint functions that may be nonconvex. The algorithms can be viewed as extensions of the stochastic-gradient method from the unconstrained setting to the constrained setting. I will motivate this work through applications such as physics-informed learning and fair learning, where in particular we have found our Newton-based methods (i.e., sequential quadratic optimization (SQP) and interior-point methods) to offer benefits over penalty-based approaches. I will summarize the convergence guarantees offered by our methods, discuss computational challenges and how to overcome them, and present the results of numerical experiments.
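
To convey the flavor of such methods, below is a minimal sketch (in Python/NumPy) of one stochastic SQP iteration for an equality-constrained problem, minimize f(x) subject to c(x) = 0. This is an illustrative toy under stated assumptions, not the speaker's algorithm: the function name, the fixed step size alpha, and the identity Hessian approximation are all choices made for the example, whereas the actual methods rely on carefully designed step-size and merit-parameter rules to obtain the convergence guarantees mentioned above.

    import numpy as np

    def stochastic_sqp_step(x, g, c, J, alpha=0.1):
        # One illustrative stochastic SQP step (not the speaker's method) for:
        #   minimize f(x) subject to c(x) = 0.
        # g : stochastic estimate of the gradient of f at x
        # c : constraint values c(x);  J : constraint Jacobian at x
        # The subproblem Hessian is approximated by the identity (an assumption).
        n, m = x.size, c.size
        H = np.eye(n)
        # KKT system of the quadratic subproblem:
        #   [ H  J^T ] [ d ]     [ g ]
        #   [ J   0  ] [ y ] = - [ c ]
        K = np.block([[H, J.T], [J, np.zeros((m, m))]])
        d = np.linalg.solve(K, -np.concatenate([g, c]))[:n]
        return x + alpha * d  # fixed step size; a real method adapts this

    # Toy demo: minimize x1^2 + x2^2 subject to x1 + x2 = 1, with noisy gradients.
    rng = np.random.default_rng(0)
    x = np.array([2.0, -1.0])
    for _ in range(200):
        g = 2.0 * x + 0.1 * rng.standard_normal(2)  # noisy gradient of ||x||^2
        c = np.array([x[0] + x[1] - 1.0])
        J = np.array([[1.0, 1.0]])
        x = stochastic_sqp_step(x, g, c, J)
    print(x)  # approaches the solution (0.5, 0.5)

Note that the linear system couples the stochastic descent direction with a linearization of the constraints, so the step pursues feasibility and optimality simultaneously; this coupling is one intuition for why Newton-based approaches can behave differently from penalty-based ones, which fold the constraints into the objective.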