A stochastic semismooth Newton method for nonconvex nonsmooth optimization
Published: 2018-05-07
Published By: Kangkang Deng
Speaker(s): Dr. Andre Milzarek (BICMR)
Time: 10:00-11:30 May 10, 2018
Venue: Room 29, Quan Zhai, BICMR
In this talk, I present a globalized semismooth Newton method for solving stochastic optimization problems whose objective function is the sum of a smooth nonconvex term and a nonsmooth convex term. The class of problems that can be solved within the proposed framework comprises a large variety of applications, such as ℓ1-logistic regression, structured dictionary learning, and other minimization problems arising in machine learning, statistics, and image processing. In the first part of my talk, I will introduce concepts from nonsmooth analysis, semismoothness, and the general semismooth Newton method for deterministic problems. In the second part, I will show how these methodologies can be extended to the stochastic setting. Specifically, I will prove that the proposed stochastic Newton-type approach converges globally to stationary points in expectation and almost surely. Moreover, under standard assumptions, the method can be shown to locally turn into a pure semismooth Newton method, and fast local convergence can be established with high probability.
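To make the deterministic building blocks concrete: for a composite problem min_x f(x) + φ(x) with f smooth and φ convex, stationarity can be recast as the nonsmooth equation F(x) = x − prox_{λφ}(x − λ∇f(x)) = 0, and a semismooth Newton method applies Newton-type steps to F using an element of its generalized Jacobian. The sketch below illustrates this for the ℓ1-logistic regression example mentioned in the abstract. It is a minimal illustrative sketch, not the method presented in the talk: the function names, the residual-decrease acceptance test used as a crude globalization, and the Tikhonov safeguard on the Jacobian are simplifications introduced here for readability.

import numpy as np

def soft_threshold(u, t):
    # proximal mapping of t*||.||_1: componentwise soft-thresholding
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def logistic_grad_hess(x, A, b):
    # f(x) = (1/m) * sum_i log(1 + exp(-b_i * a_i^T x)), labels b_i in {-1, +1}
    m = A.shape[0]
    sig = 1.0 / (1.0 + np.exp(-b * (A @ x)))
    grad = -(A.T @ (b * (1.0 - sig))) / m
    hess = (A.T * (sig * (1.0 - sig))) @ A / m
    return grad, hess

def semismooth_newton(A, b, mu, lam=1.0, tol=1e-8, max_iter=100):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(max_iter):
        grad, hess = logistic_grad_hess(x, A, b)
        u = x - lam * grad
        F = x - soft_threshold(u, lam * mu)   # natural residual F(x)
        if np.linalg.norm(F) <= tol:
            break
        # One element of the generalized Jacobian of F:
        #   M = I - D*(I - lam*hess),  D = diag(1{|u_i| > lam*mu}),
        # where the indicator selects an element of the (set-valued)
        # derivative of the soft-thresholding operator.
        d = (np.abs(u) > lam * mu).astype(float)
        M = np.eye(n) - d[:, None] * (np.eye(n) - lam * hess)
        # small Tikhonov safeguard in case M is (nearly) singular
        s = np.linalg.solve(M + 1e-10 * np.eye(n), -F)
        x_new = x + s
        g_new, _ = logistic_grad_hess(x_new, A, b)
        F_new = x_new - soft_threshold(x_new - lam * g_new, lam * mu)
        # crude globalization: keep the Newton iterate only if the residual
        # decreases, otherwise fall back to a proximal gradient step
        if np.linalg.norm(F_new) < np.linalg.norm(F):
            x = x_new
        else:
            x = soft_threshold(u, lam * mu)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    b = np.sign(A @ rng.standard_normal(50))
    x = semismooth_newton(A, b, mu=0.1)
    print("nonzero entries:", np.count_nonzero(np.abs(x) > 1e-10))

In the stochastic setting discussed in the second part of the talk, the full gradient and Hessian above would be replaced by mini-batch estimates, and the simple residual-decrease test by a globalization mechanism suited to noisy function information; the convergence-in-expectation and almost-sure results concern that stochastic variant.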