From Quasi-Newton to Extrapolation: Black-Box, Memory-Efficient and Line-Search-Free Acceleration via Stabilized Anderson Acceleration
Date Posted: 2019-06-10
Published By: Xiaoni Tan
Speaker(s): Junzi Zhang (Stanford University)
Time: 09:30-10:30, June 14, 2019
Venue: Room 29, Quan Zhai, BICMR
Anderson Acceleration (AA) is a classical acceleration approach for solving nonlinear equations and fixed-point problems arising in computational quantum chemistry and materials science. It has recently been observed to be both a generalization of and an efficient alternative to traditional quasi-Newton (QN) methods, and has been applied to optimization and decision-making problems with empirical success. However, as with most other acceleration techniques, it suffers from instability. To resolve this issue, we propose generic stabilization techniques for AA based on regularization, restarts, and safeguards. As a result, we obtain a black-box, memory-efficient, and line-search-free acceleration scheme with global convergence guarantees in theory and successful applications in practice. We illustrate the power of stabilized AA via several solvers/algorithms we have developed recently, including SCS v2 (for conic programs) and AA-iPALM (for non-convex statistical learning).
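For readers unfamiliar with the method, below is a minimal Python sketch of (Type-II) Anderson Acceleration with a Tikhonov-regularized least-squares subproblem, illustrating the regularization idea named in the abstract. The function name `anderson_accelerate`, the parameters `memory` and `lam`, and the demo are illustrative assumptions, not the speaker's implementation; the restart and safeguard steps from the talk are omitted.

```python
import numpy as np

def anderson_accelerate(g, x0, memory=5, lam=1e-8, tol=1e-10, max_iter=100):
    """Sketch of Type-II Anderson Acceleration for the fixed-point problem x = g(x).

    A Tikhonov term `lam` regularizes the least-squares subproblem, one
    of the stabilization ideas mentioned in the abstract. Only the last
    `memory` residuals are kept, which is what makes AA memory-efficient.
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                       # residual of the fixed-point map
    G_hist, F_hist = [gx], [f]       # histories of g(x_k) and residuals

    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        m = min(memory, len(F_hist) - 1)
        if m == 0:
            x = gx                   # plain fixed-point step to start
        else:
            # Differences of the last m residuals and map values.
            dF = np.column_stack([F_hist[-i] - F_hist[-i - 1] for i in range(1, m + 1)])
            dG = np.column_stack([G_hist[-i] - G_hist[-i - 1] for i in range(1, m + 1)])
            # Regularized least squares: (dF^T dF + lam I) gamma = dF^T f.
            A = dF.T @ dF + lam * np.eye(m)
            gamma = np.linalg.solve(A, dF.T @ f)
            x = gx - dG @ gamma      # extrapolated iterate
        gx = g(x)
        f = gx - x
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > memory + 1: # drop the oldest history entries
            G_hist.pop(0)
            F_hist.pop(0)
    return x

# Example: accelerate the contraction g(x) = cos(x) applied componentwise.
g = np.cos
x_star = anderson_accelerate(g, np.zeros(3))
# Each coordinate converges to ~0.7390851 (the fixed point of cos),
# in far fewer iterations than the plain iteration x <- cos(x).
```

Without the `lam` term the least-squares matrix can become nearly singular as the residual differences align, which is one source of the instability the abstract refers to; the regularization keeps the subproblem well-conditioned at the cost of a slight bias in the extrapolation coefficients.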