Geometry arises in myriad ways across the sciences, and quite naturally within AI and optimization too. I'd like to share with you examples where geometry helps us understand problems in machine learning and optimization. Time permitting, I'd also like to mention new results in geometric sampling: for instance, when sampling from densities supported on a manifold, understanding the geometry and the impact of curvature is crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning. Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help make training algorithms more practical (e.g., in deep learning), reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or simply help us understand important ideas better (e.g., eigenvectors, LLM training, etc.).
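
For readers unfamiliar with the term, here is the standard textbook definition of geodesic convexity (given for illustration; it is not a result from the talk itself). A function $f:\mathcal{M}\to\mathbb{R}$ on a Riemannian manifold $\mathcal{M}$ is geodesically convex if, along every geodesic $\gamma:[0,1]\to\mathcal{M}$,

\[
  f(\gamma(t)) \;\le\; (1-t)\, f(\gamma(0)) + t\, f(\gamma(1)), \qquad t \in [0,1].
\]

Such a function may be non-convex in the usual Euclidean sense, yet every local minimum is a global minimum, which is one sense in which geometry can reveal tractability.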

Ultimately, my hope is to offer the audience insights into geometric thinking, and to share some new tools that can help us make progress on modeling, algorithms, and applications. To make my discussion concrete, I will recall a few foundational results arising from our research, provide several examples, and note some open problems.