Summer School on Theoretical Statistics
Time: 2023-07-03 09:00 to 2023-07-07 17:00
Venue: Beijing International Center for Mathematical Research, Lecture Hall, Jia-Yi-Bing Building, Jingchunyuan 82
The program consists of four lecture courses, given by Yuxin Chen, Weijie Su, Yuting Wei, and Yihong Wu.
Lecturer: Yuxin Chen (陈昱鑫)
Title: Spectral Methods for Data Science
Abstract: Spectral methods have emerged as a simple yet surprisingly effective approach for extracting information from massive, noisy, and incomplete data. In a nutshell, spectral methods refer to a collection of algorithms built upon the eigenvalues (resp. singular values) and eigenvectors (resp. singular vectors) of some properly designed matrices constructed from data. A diverse array of applications has been found in machine learning and data science. Due to their simplicity and effectiveness, spectral methods are not only used as stand-alone estimators, but are also frequently employed to initialize other, more sophisticated algorithms to improve performance. While the study of spectral methods can be traced back to classical matrix perturbation theory and the method of moments, the past decade has witnessed tremendous theoretical advances in demystifying their efficacy through the lens of statistical modeling, with the aid of non-asymptotic random matrix theory. This course aims to present a systematic, comprehensive, yet accessible introduction to spectral methods from a modern statistical perspective, highlighting their algorithmic implications in diverse large-scale applications. In particular, our exposition gravitates around several central questions that span various applications: how to characterize the sample efficiency of spectral methods in reaching a target level of statistical accuracy, and how to assess their stability in the face of random noise, missing data, and adversarial corruptions? In addition to conventional L2 perturbation analysis, we present fine-grained perturbation theory for eigenspaces and singular subspaces, enabled by a powerful "leave-one-out" analysis framework. This course is based on the monograph arXiv:2012.08496.
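To give a concrete sense of the kind of algorithm the course studies, here is a minimal sketch (not part of the course materials) of a vanilla spectral estimator for noisy low-rank matrix completion: rescale the partially observed entries by the inverse sampling rate and keep the top components of an SVD. The rank, sampling rate, and noise level below are hypothetical choices made purely for illustration.

```python
# Illustrative sketch only: a vanilla spectral estimator for noisy low-rank
# matrix completion. The rank r, sampling rate p, and noise level sigma are
# hypothetical values chosen for this toy example.
import numpy as np

rng = np.random.default_rng(0)
n, r, p, sigma = 200, 2, 0.3, 0.1

# Ground-truth rank-r matrix M = U V^T.
U = rng.normal(size=(n, r))
V = rng.normal(size=(n, r))
M = U @ V.T

# Observe each entry independently with probability p, corrupted by Gaussian noise.
mask = rng.random((n, n)) < p
Y = np.where(mask, M + sigma * rng.normal(size=(n, n)), 0.0)

# Spectral method: rescale by 1/p (inverse sampling rate) and truncate the SVD at rank r.
Us, s, Vst = np.linalg.svd(Y / p, full_matrices=False)
M_hat = Us[:, :r] @ np.diag(s[:r]) @ Vst[:r, :]

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(f"relative estimation error: {rel_err:.3f}")
```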
Bio: Yuxin Chen is currently an Associate Professor of Statistics and Data Science and of Electrical and Systems Engineering at the University of Pennsylvania. Before joining UPenn, he was an Assistant Professor of Electrical and Computer Engineering at Princeton University and an associate faculty member in Computer Science and in Applied and Computational Mathematics. He completed his Ph.D. in Electrical Engineering at Stanford University and was a postdoctoral scholar in Statistics at Stanford. His current research interests include high-dimensional statistics, nonconvex optimization, reinforcement learning, and information theory. He has received the Alfred P. Sloan Research Fellowship, the ICCM Best Paper Award (gold medal), the Princeton Graduate Mentoring Award, and the Princeton SEAS Junior Faculty Award, and was selected as a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization.
Lecturer: Weijie Su (苏炜杰)
Title: Some (Not So) Theoretical Questions About ChatGPT
Abstract: The emergence of generative AI technologies such as ChatGPT, Stable Diffusion, Copilot, and Midjourney will soon cause profound societal changes. While tech companies such as OpenAI, DeepMind, Google, and Meta have taken the lead in this AI race, the imminent integration of these technologies into economic, commercial, social, and even political activities creates exciting opportunities for researchers in mathematics, statistics, economics, and theoretical computer science. In this course, we will explore several theoretical questions related to generative AI, including aligning AI values with those of diverse human groups, privacy and fairness concerns of AI models, tensions between AI models and content providers, and mechanisms for designing data-exchange markets for training AI models. In analyzing these issues, we will present preliminary results and brainstorm future developments.
This course requires no coding experience; students need only a pencil and paper to take part.
Bio: Weijie Su is an Associate Professor in the Wharton Statistics and Data Science Department and, by courtesy, in the Department of Computer and Information Science at the University of Pennsylvania. He is a co-director of Penn Research in Machine Learning. Prior to joining Penn, he received his Ph.D. from Stanford University under the supervision of Emmanuel Candès in 2016 and his bachelor's degree in mathematics from Peking University in 2011. His research interests span privacy-preserving data analysis, deep learning theory, optimization, mechanism design, and high-dimensional statistics. He serves as an associate editor of the Journal of the American Statistical Association (Theory and Methods). He is a recipient of the Stanford Theodore Anderson Dissertation Award in 2016, an NSF CAREER Award in 2019, an Alfred P. Sloan Research Fellowship in 2020, the SIAM Early Career Prize in Data Science in 2022, and the IMS Peter Gavin Hall Prize in 2022.
Lecturer: Yuting Wei (魏玉婷)
Title: Statistical and Algorithmic Foundations of Reinforcement Learning
Abstract: As a paradigm for sequential decision making in unknown environments, reinforcement learning (RL) has received a flurry of attention in recent years. However, the explosion of model complexity in emerging applications and the presence of nonconvexity exacerbate the challenge of achieving efficient RL in sample-starved situations, where data collection is expensive, time-consuming, or even high-stakes (e.g., in clinical trials, autonomous systems, and online advertising). Understanding and enhancing the sample and computational efficiency of RL algorithms is thus of great interest and urgently needed. In this tutorial, we aim to present a coherent framework that covers important algorithmic and theoretical developments in RL, highlighting the connections between new ideas and classical topics. Employing Markov Decision Processes as the central mathematical model, we start by introducing classical dynamic programming algorithms for the case where precise descriptions of the environment are available. Equipped with this preliminary background, we introduce four distinct RL scenarios (i.e., RL with a generative model, offline RL, online RL, and multi-agent RL) and present three mainstream RL paradigms (i.e., the model-based approach, the model-free approach, and policy optimization). Our discussions gravitate around the issues of sample complexity and computational efficiency, as well as algorithm-dependent and information-theoretic lower bounds in the non-asymptotic regime. We will systematically introduce several effective algorithmic ideas (e.g., stochastic approximation, variance reduction, optimism in the face of uncertainty for online RL, and pessimism in the face of uncertainty for offline RL) that permeate the design of efficient RL algorithms.
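As a concrete illustration of the classical dynamic-programming starting point mentioned in the abstract, the sketch below runs value iteration on a small synthetic MDP; the transition kernel, rewards, and discount factor are made-up inputs chosen only for illustration.

```python
# Illustrative sketch only: value iteration on a tiny synthetic MDP.
# The transition probabilities, rewards, and discount factor are hypothetical.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)

# P[a, s, t] = probability of moving to state t when taking action a in state s.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# r[s, a] = immediate reward for taking action a in state s.
r = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality update: Q(s,a) = r(s,a) + gamma * sum_t P(t|s,a) V(t).
    Q = r + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop near the fixed point
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged value function
print("optimal values:", np.round(V, 3))
print("greedy policy:", policy)
```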
Bio: Yuting Wei is currently an Assistant Professor in the Department of Statistics and Data Science at the Wharton School, University of Pennsylvania. Prior to that, Yuting spent two years at Carnegie Mellon University as an Assistant Professor of Statistics and Data Science, and one year in the Department of Statistics at Stanford University as a Stein Fellow. She received her Ph.D. in statistics from the University of California, Berkeley, under the supervision of Martin Wainwright and Aditya Guntuboyina, and her Bachelor of Science, with honors, from Peking University. She is a recipient of the 2022 NSF CAREER Award, an honorable mention for the 2023 Bernoulli Society New Researcher Award, and the 2018 Erich L. Lehmann Citation from the Berkeley statistics department for her Ph.D. dissertation in theoretical statistics. Her research interests include high-dimensional statistics, non-parametric statistics, statistical machine learning, and reinforcement learning.
Lecturer: Yihong Wu (吴毅弘)
Title: Recent Results in Planted Assignment Problems
Abstract: Motivated by applications such as particle tracking, network de-anonymization, and computer vision, a recent thread of research is devoted to statistical models of assignment problems, in which the data are random (weighted) graphs correlated through a latent permutation. In contrast to problems such as the planted clique or the stochastic block model, the major difference here is the lack of low-rank structure, which brings forth new challenges in both statistical analysis and algorithm design. In this short course, we will discuss a couple of prototypical problems in this research thread, namely the planted bipartite matching problem (linear assignment) and the graph matching problem (quadratic assignment). We will focus on both the information-theoretic and algorithmic aspects of these problems, introducing the needed mathematical machinery along the way and discussing various open problems.
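For a concrete sense of the linear-assignment setting, the sketch below (not drawn from the course materials) builds a toy planted bipartite matching instance and recovers the hidden permutation by maximum-weight matching via scipy.optimize.linear_sum_assignment; the Gaussian weight model and the signal strength mu are hypothetical choices for illustration.

```python
# Illustrative sketch only: a toy planted bipartite matching (linear assignment)
# instance. The Gaussian weight model and signal strength mu are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n, mu = 100, 3.0

# Hidden permutation to be recovered.
pi_star = rng.permutation(n)

# Weight matrix: planted pairs (i, pi_star[i]) receive an elevated mean mu,
# while all other entries are pure noise.
W = rng.normal(size=(n, n))
W[np.arange(n), pi_star] += mu

# Estimator: maximum-weight bipartite matching (Hungarian algorithm).
# Negate W because linear_sum_assignment minimizes total cost.
_, pi_hat = linear_sum_assignment(-W)

print("fraction of correctly matched pairs:", np.mean(pi_hat == pi_star))
```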
Bio: Yihong Wu is a Professor in the Department of Statistics and Data Science at Yale University. He received his B.E. degree from Tsinghua University in 2006 and his Ph.D. degree from Princeton University in 2011. He was a postdoctoral fellow in the Statistics Department of the Wharton School at the University of Pennsylvania from 2011 to 2012 and an assistant professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign from 2013 to 2015. His research interests are in the theoretical and algorithmic aspects of high-dimensional statistics, information theory, and optimization. He was a recipient of the NSF CAREER Award in 2017 and the Sloan Research Fellowship in Mathematics in 2018.