Nonconvex ADMM: Convergence and Applications (by Yu Wang, Wotao Yin, and Jinshan Zeng)
Speaker: Prof. Wotao Yin, UCLA
Time: 2015-12-16, 19:00 to 20:00
Venue: Quan 9
ADMM has been surprising us with numerical success on many nonconvex optimization problems, including, but not limited to, the minimization of the L_q quasi-norm and the Schatten-q quasi-norm (0 < q < 1), SCAD, and bilinear and biconvex functions, as well as problems subject to normalization and matrix manifold constraints.
We study when ADMM converges on minimizing a nonconvex, possibly nonsmooth, objective function, f(x_1,…,x_p,y), subject to linear equality constraints that couple x_1,…,x_p,y, where p > 0 is an integer. Our ADMM sequentially updates the primal variables in the order x_1,…,x_p,y, followed by updating the dual variable. We separate the variable y from the x_i's since y plays a special role in our analysis. Our results provide sufficient conditions for this nonconvex ADMM to converge with two, three, or more blocks, since these settings are special cases of our model. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to nonconvex models in statistical learning, optimization on manifolds, and matrix decomposition are guaranteed to converge.
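To make the update order concrete, here is a minimal two-block sketch (p = 1 in the notation above) in Python. The toy nonconvex objective f, the L1 term g, and all parameter values are illustrative assumptions, not the models analyzed in the talk; the sketch only shows the x-update, y-update, dual-update cycle.

```python
import numpy as np
from scipy.optimize import minimize

# Two-block nonconvex ADMM sketch (p = 1): minimize f(x) + g(y)
# subject to x - y = 0, with f nonconvex smooth and g(y) = lam*||y||_1.
# The problem data below are assumptions for illustration only.

def f(x):
    # A toy nonconvex smooth function (hypothetical choice).
    return np.sum(x**2 / (1.0 + x**2))

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm(x0, lam=0.1, beta=1.0, iters=100):
    x, y = x0.copy(), x0.copy()
    w = np.zeros_like(x0)                      # dual variable
    for _ in range(iters):
        # x-update: minimize f(x) + w.(x - y) + (beta/2)||x - y||^2 numerically.
        obj = lambda u: f(u) + w @ (u - y) + 0.5 * beta * np.sum((u - y)**2)
        x = minimize(obj, x, method="BFGS").x
        # y-update: closed form via the prox of lam*||.||_1.
        y = soft_threshold(x + w / beta, lam / beta)
        # Dual ascent step on the constraint x - y = 0.
        w = w + beta * (x - y)
    return x, y

x, y = admm(np.array([2.0, -1.5, 0.5]))
print(x, y, np.linalg.norm(x - y))  # residual should be small at convergence
```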
ADMM has been regarded as a variant of the augmented Lagrangian method (ALM). However, we present a simple nonconvex example illustrating that ADMM can converge while ALM (with a fixed penalty parameter) diverges.
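The structural difference behind this comparison can be sketched in code: ALM with a fixed penalty minimizes the augmented Lagrangian jointly over all primal blocks before each dual update, while ADMM sweeps the blocks one at a time. The sketch below, on a hypothetical toy problem reusing f and g from the previous sketch, shows only this difference; it does not reproduce the specific divergence example from the talk.

```python
import numpy as np
from scipy.optimize import minimize

def f(x): return np.sum(x**2 / (1.0 + x**2))   # toy nonconvex smooth term
def g(y): return 0.1 * np.sum(np.abs(y))       # toy nonsmooth term

def aug_lag(x, y, w, beta):
    # Augmented Lagrangian for the constraint x - y = 0.
    r = x - y
    return f(x) + g(y) + w @ r + 0.5 * beta * (r @ r)

def alm_step(x, y, w, beta):
    # ALM: joint minimization over (x, y), then a dual update; beta stays fixed.
    n = x.size
    z = minimize(lambda z: aug_lag(z[:n], z[n:], w, beta),
                 np.concatenate([x, y]), method="Nelder-Mead").x
    x, y = z[:n], z[n:]
    return x, y, w + beta * (x - y)

def admm_step(x, y, w, beta):
    # ADMM: one Gauss-Seidel sweep over the blocks instead of a joint solve.
    x = minimize(lambda u: aug_lag(u, y, w, beta), x, method="Nelder-Mead").x
    y = minimize(lambda v: aug_lag(x, v, w, beta), y, method="Nelder-Mead").x
    return x, y, w + beta * (x - y)

x = np.array([1.0, -2.0]); y = x.copy(); w = np.zeros(2)
for _ in range(20):
    x, y, w = admm_step(x, y, w, beta=1.0)
print(x, y, np.linalg.norm(x - y))
```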