A stochastic semismooth Newton method for nonconvex nonsmooth optimization
Posted: May 7, 2018
Posted by: Kangkang Deng
Speaker: Dr. Andre Milzarek (BICMR)
Time: 2018-05-10, 10:00–11:30
Venue: Room 29, Quan Zhai, BICMR
In this talk, I present a globalized semismooth Newton method for solving stochastic optimization problems involving smooth nonconvex and nonsmooth convex terms in the objective function. The resulting class of problems that can be solved within the proposed framework comprises a large variety of applications such as l1-logistic regression, structured dictionary learning, and other minimization problems arising in machine learning, statistics, or image processing. In the first part of my talk, I will introduce concepts from nonsmooth analysis, semismoothness, and the general semismooth Newton method for deterministic problems. In the second part, I will then show how these methodologies can be extended to the stochastic setting. Specifically, I will prove that the proposed stochastic Newton-type approach converges globally to stationary points in expectation and almost surely. Moreover, under standard assumptions, the method can be shown to locally turn into a pure semismooth Newton method and fast local convergence can be established with high probability.
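To make the setting concrete, the following is a minimal sketch of the deterministic semismooth Newton method from the first part of the talk, applied to ℓ1-logistic regression. It is not the speaker's stochastic algorithm: all function names, the problem data, and details such as the Jacobian regularization and the absence of a globalization strategy are our own illustrative assumptions. Stationarity is encoded by the natural residual F(x) = x − prox(x − ∇f(x)), and each iteration applies a Newton step to F using an element of its generalized Jacobian.

```python
import numpy as np

# Illustrative sketch (not the speaker's method) of a plain, local semismooth
# Newton iteration for l1-logistic regression:
#   min_x  f(x) + lam * ||x||_1,
#   f(x) = (1/n) * sum_i log(1 + exp(-y_i * a_i^T x)).
# No globalization (line search / trust region) is included here.

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def grad_hess(A, y, x):
    """Gradient and Hessian of the smooth logistic loss f at x."""
    n = A.shape[0]
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))      # sigma(-y_i * a_i^T x)
    g = -(A.T @ (y * s)) / n
    H = (A.T * (s * (1.0 - s))) @ A / n
    return g, H

def semismooth_newton(A, y, lam, x0, tol=1e-8, max_iter=50):
    """Pure local semismooth Newton iteration on the natural residual
    F(x) = x - prox_{lam||.||_1}(x - grad f(x)); F(x) = 0 iff x is stationary."""
    x = x0.copy()
    m = x.size
    for _ in range(max_iter):
        g, H = grad_hess(A, y, x)
        z = x - g
        F = x - soft_threshold(z, lam)
        if np.linalg.norm(F) < tol:
            break
        # One element of the generalized Jacobian of F at x:
        #   J = I - D (I - H), with D = diag(1{|z_i| > lam}),
        # since soft-thresholding is piecewise linear with slope 0 or 1.
        d = (np.abs(z) > lam).astype(float)
        J = np.eye(m) - d[:, None] * (np.eye(m) - H)
        # Small Tikhonov term for numerical safety (an assumption of this sketch).
        step = np.linalg.solve(J + 1e-8 * np.eye(m), -F)
        x = x + step
    return x
```

On rows where the prox is inactive (|z_i| ≤ lam), the Jacobian row reduces to the identity and the step drives x_i to zero; on active rows it reduces to the Hessian row, so the iteration behaves like Newton's method restricted to the active set. The talk's framework adds the globalization and the stochastic treatment of g and H that this local sketch omits.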