The Star Geometry of Regularizer Learning
Speaker(s): Oscar Leong (UCLA)
Time: 10:30-11:30, July 31, 2024
Venue: Room 29, Quan Zhai, BICMR
Abstract:
Across many tasks in data science, it is necessary to estimate a signal from corrupted measurements. Perhaps the most pervasive and commonly used technique to address such problems is variational regularization. This consists of solving an optimization problem where one must minimize the sum of a data fidelity term and a regularizer, a penalty term chosen to encourage certain structure in solutions. While there is a suite of regularizers one could choose from, we currently lack a systematic understanding from a modeling perspective of what types of geometries should be preferred in a regularizer for a given data source. In particular, given a data distribution, what is the "optimal" regularizer for such data? Moreover, what aspects about the data govern whether the regularizer enjoys certain properties, such as convexity? Using ideas from star geometry, Brunn-Minkowski theory, and variational analysis, I show that we can characterize the optimal regularizer for a given distribution and establish conditions under which this optimal regularizer is convex. Moreover, I will discuss how our theory can be applied to recent deep learning-based regularization learning frameworks that incorporate additional measurement information into regularizers, which are especially useful in the context of inverse problems.
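To make the variational regularization setup concrete, here is a minimal illustrative sketch (not from the talk): for denoising with an ℓ1 regularizer, minimizing 0.5·||x − y||² + λ·||x||₁ has a closed-form solution given by elementwise soft-thresholding, a standard example of a penalty encouraging sparse structure.

```python
# Illustrative sketch of variational regularization (assumed example, not the speaker's method).
# Denoising objective: minimize 0.5 * ||x - y||^2 + lam * ||x||_1.
# Its minimizer is the soft-thresholding operator applied elementwise to y.
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer of 0.5*||x - y||^2 + lam*||x||_1, applied elementwise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Noisy observation of a signal assumed to be sparse
y = np.array([3.0, -0.2, 0.1, -2.5, 0.05])
x_hat = soft_threshold(y, lam=0.5)
print(x_hat)  # entries with magnitude below lam are set to zero; the rest are shrunk toward zero
```

The ℓ1 norm here plays the role of the regularizer; the talk asks which such penalty is "optimal" for a given data distribution, and when that optimal penalty is convex.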
Bio:
Dr. Oscar Leong is an Assistant Professor in the Statistics and Data Science Department at UCLA. Until July 2024, he was a von Karman Instructor in the Computing and Mathematical Sciences Department at Caltech, hosted by Dr. Venkat Chandrasekaran. There, he also worked with Dr. Katherine L. Bouman and the Computational Cameras group. Dr. Leong received his PhD in Computational and Applied Mathematics from Rice University under the supervision of Dr. Paul Hand, where he was an NSF Graduate Fellow. Dr. Leong has been recognized as a Rising Star in Data Science by the University of Chicago and received the MGB-SIAM Early Career Fellowship in 2023. His research interests lie in the mathematics of data science, optimization, and machine learning, where he studies the theory and application of learning-based methods for solving inverse problems. He is broadly interested in using tools from convex and star geometry, high-dimensional statistics, and nonlinear optimization to better understand and improve data-driven decision-making algorithms.