On the Model-misspecification of Reinforcement Learning
Posted: 2023-07-18
Published By: Xiaoni Tan
Speaker(s): Lin Yang (UCLA)
Time: 16:00-17:00 July 19, 2023
Venue: Room 78201, Jingchunyuan 78, BICMR
Abstract: The success of reinforcement learning (RL) heavily depends on the approximation of functions such as policies, value functions, or models. Misspecification, a mismatch between the ground truth and the best function approximator, often occurs, particularly when the ground truth is complex. Because the misspecification error does not vanish even with an infinite number of samples, it is crucial to design algorithms that remain robust under misspecification. In this talk, we will first present a lower bound showing that RL can be inefficient (e.g., have exponentially large complexity) if the features can only represent the optimal value functions approximately, even when the approximation is highly precise. We will then show that this issue can be mitigated by approximating the transition probabilities instead. In that setting, we will demonstrate that both policy-based and value-based approaches are resilient to model misspecification. Specifically, these methods can maintain accuracy even under large, locally-bounded misspecification errors: the function class may have an \Omega(1) approximation error at specific states and actions, yet the error remains small on average under a policy-induced state distribution. Such robustness to model misspecification partially explains why practical algorithms perform so well, and it paves the way for new directions in understanding model misspecification.
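As a rough formalization of the locally-bounded misspecification condition described above (the symbols below, including the per-state-action error \epsilon(s,a), the average bound \epsilon_{\mathrm{avg}}, and the distribution d^\pi, are notation introduced here for illustration and do not appear in the talk materials):

    \max_{s,a} \epsilon(s,a) = \Omega(1), \qquad \text{yet} \qquad \mathbb{E}_{(s,a) \sim d^\pi}\!\big[\epsilon(s,a)\big] \le \epsilon_{\mathrm{avg}} \ll 1,

where \epsilon(s,a) denotes the best approximation error the function class can achieve at the state-action pair (s,a), and d^\pi is the state-action distribution induced by a policy \pi of interest. In words, the error may be large at a few states and actions but must stay small on average along the states a relevant policy actually visits.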
Bio: Dr. Lin Yang is an Assistant Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. His current research focuses on the theory and applications of reinforcement learning. Previously, he was a postdoctoral researcher at Princeton University. He earned two Ph.D. degrees, in Computer Science and in Physics & Astronomy, from Johns Hopkins University, and before that a Bachelor's degree in Math & Physics from Tsinghua University. Dr. Yang has published extensively in premier machine learning venues such as ICML and NeurIPS and has served as an area chair for these conferences. He has received an Amazon Faculty Award, a Simons-Berkeley Research Fellowship, the JHU MINDS Best Dissertation Award, and the JHU Dean Robert H. Roy Fellowship.