From Quasi-Newton to Extrapolation: Black-box, Memory Efficient and Line-search Free Acceleration via Stabilized Anderson Acceleration
Posted: June 10, 2019
Posted by: Xiaoni Tan
Speaker: Junzi Zhang (Stanford University)
Time: 2019-06-14, 09:30 to 10:30
Venue: Room 29, Quan Zhai, BICMR
Anderson Acceleration (AA) is a classical acceleration approach for solving nonlinear equations and fixed-point problems arising in computational quantum chemistry and materials science. It has recently been observed to be a generalization of, and an efficient alternative to, traditional quasi-Newton (QN) methods, and has been applied with empirical success to optimization and decision-making problems. However, as with most other acceleration techniques, it suffers from instability. To resolve this issue, we propose generic stabilization techniques for AA based on regularization, restarting, and safeguarding. As a result, we obtain a black-box, memory-efficient, and line-search-free acceleration scheme with global convergence guarantees in theory and successful applications in practice. We illustrate the power of stabilized AA via several solvers and algorithms we have recently developed, including SCS v2 (for conic programs) and AA-iPALM (for non-convex statistical learning).
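For readers unfamiliar with AA, the sketch below shows a minimal stabilized type-II Anderson Acceleration loop for a fixed-point map g with solution x* = g(x*). It is only an illustration of the three stabilization ideas named in the abstract (regularization, restarting, safeguarding); the function name stabilized_aa and the parameters m, reg, and safeguard are assumptions for this sketch, not the exact scheme from the talk or from SCS v2.

```python
import numpy as np

def stabilized_aa(g, x0, m=5, reg=1e-8, safeguard=1.0, max_iter=100, tol=1e-10):
    """Type-II Anderson Acceleration with Tikhonov regularization,
    a residual-based safeguard, and a restart on rejected steps."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    X_hist, G_hist = [x], [gx]            # bounded history of iterates and images
    for _ in range(max_iter):
        f = gx - x                        # fixed-point residual f(x) = g(x) - x
        if np.linalg.norm(f) < tol:
            break
        if len(X_hist) > 1:
            # Differences of successive iterates and of their images under g.
            dX = np.column_stack([X_hist[i + 1] - X_hist[i]
                                  for i in range(len(X_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            dF = dG - dX
            # Regularized least squares for the mixing coefficients:
            # gamma = argmin ||f - dF @ gamma||^2 + reg * ||gamma||^2.
            # The Tikhonov term stabilizes a possibly near-singular system.
            A = dF.T @ dF + reg * np.eye(dF.shape[1])
            gamma = np.linalg.solve(A, dF.T @ f)
            x_cand = gx - dG @ gamma      # candidate Anderson step
        else:
            x_cand = gx                   # too little history: plain fixed-point step
        g_cand = g(x_cand)
        # Safeguard: accept the AA step only if it does not increase the
        # residual; otherwise fall back to g(x) and restart (clear the memory).
        if np.linalg.norm(g_cand - x_cand) <= safeguard * np.linalg.norm(f):
            x, gx = x_cand, g_cand
        else:
            x = gx
            gx = g(x)
            X_hist, G_hist = [], []
        X_hist.append(x)
        G_hist.append(gx)
        if len(X_hist) > m + 1:           # keep at most m difference columns
            X_hist.pop(0)
            G_hist.pop(0)
    return x

if __name__ == "__main__":
    # Toy demo: g(x) = cos(x) componentwise has fixed point ~0.7390851.
    print(stabilized_aa(np.cos, np.zeros(3)))
```

Note that because the history length is capped at m, the scheme is memory-efficient, and the safeguard test replaces any line search, matching the black-box character claimed in the abstract.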