On The Global Convergence of Randomized Coordinate Gradient Descent for Non-Convex Optimization
Time: 2024-01-03
Published By: Xiaoni Tan
Speaker(s): Zi'ang Chen (MIT)
Time: 10:00-11:00 January 5, 2024
Venue: Room 9, Quan Zhai, BICMR
Abstract: We analyze the global convergence of coordinate gradient descent with random choices of coordinates and stepsizes for non-convex optimization problems. Under generic assumptions, we prove that the iterates of the algorithm almost surely escape strict saddle points of the objective function. As a consequence, the algorithm is guaranteed to converge to local minima if all saddle points are strict. Our proof is based on viewing the coordinate descent algorithm as a nonlinear random dynamical system and on a quantitative finite-block analysis of its linearization around saddle points. This is joint work with Yingzhou Li and Jianfeng Lu.
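For readers unfamiliar with the algorithm studied in the talk, below is a minimal sketch of coordinate gradient descent with randomly chosen coordinates and stepsizes, run on an illustrative non-convex objective with a strict saddle point at the origin. The test function, stepsize range, and iteration count are assumptions chosen for illustration; they are not taken from the talk.

```python
# A minimal sketch of randomized coordinate gradient descent, assuming a
# smooth objective with gradient grad_f; all numerical choices below are
# illustrative, not from the talk.
import numpy as np

def randomized_cgd(grad_f, x0, step_range=(0.01, 0.1), n_iters=10000, seed=0):
    """At each iteration, pick one coordinate uniformly at random and update
    it with a stepsize drawn uniformly from step_range."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(n_iters):
        i = rng.integers(d)             # random coordinate
        eta = rng.uniform(*step_range)  # random stepsize
        x[i] -= eta * grad_f(x)[i]      # coordinate gradient step
    return x

# Illustrative non-convex objective f(x, y) = x^2 - y^2 + y^4/4, which has a
# strict saddle at the origin and local minima at (0, ±sqrt(2)).
def grad_f(x):
    return np.array([2.0 * x[0], -2.0 * x[1] + x[1] ** 3])

# Started near the saddle, the random coordinate updates escape it and
# converge to a local minimum, e.g. approximately (0, sqrt(2)).
print(randomized_cgd(grad_f, x0=[1e-6, 1e-6]))
```

Here the randomness enters in two places, matching the setting of the abstract: the coordinate to update and the stepsize are both drawn at random at every iteration. The talk's result says that, under generic assumptions, such iterates almost surely avoid converging to the strict saddle at the origin.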