How private is SGD+noise? -- A CLT approach via Gaussian Differential Privacy
Time: 2020-05-06
Published By: Qi Liu
Speaker(s): Weijie Su, Jinshuo Dong (University of Pennsylvania)
Time: 10:00-11:00 May 11, 2020
Venue: Online
It is important to understand the exact privacy guarantees provided by the private algorithms developed over the past prosperous decade of differential privacy, especially noisy SGD. In particular, any underestimation of the actual privacy guarantee translates into unnecessary noise in the algorithm and a loss in final accuracy. We observe a central limit behavior in iterative private algorithms, which demonstrates the limitations of the common $(\varepsilon,\delta)$ parametrization in most application scenarios, including deep learning. To the rescue, we propose a new notion called Gaussian Differential Privacy (GDP) and develop a complete toolkit around it. We carry out various experiments showing how much unnecessary loss of accuracy can be saved in deep learning applications. Based on joint work with Jinshuo Dong, Aaron Roth, Zhiqi Bu and Qi Long.

ZOOM INFO: https://zoom.com.cn/j/66493401785?pwd=bWZ6ZXI1ZEhxWHF6NS9CNzR0eG93Zz09
ID: 664 9340 1785
PIN: 294560
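For readers unfamiliar with GDP: a mechanism is $\mu$-GDP if distinguishing two neighboring datasets from its output is no easier than distinguishing $\mathcal{N}(0,1)$ from $\mathcal{N}(\mu,1)$. Two properties make the notion convenient: composing $\mu_i$-GDP mechanisms yields $\sqrt{\sum_i \mu_i^2}$-GDP, and any $\mu$-GDP guarantee converts to a family of $(\varepsilon,\delta)$ guarantees via the standard normal CDF $\Phi$. The sketch below (illustrative, not the speakers' toolkit; function names are ours) implements these two formulas:

```python
from math import erf, exp, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gdp_to_delta(eps: float, mu: float) -> float:
    """Smallest delta for which a mu-GDP mechanism is (eps, delta)-DP,
    using the GDP duality formula:
        delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    return Phi(-eps / mu + mu / 2.0) - exp(eps) * Phi(-eps / mu - mu / 2.0)

def compose_gdp(mus) -> float:
    """Composition of mu_i-GDP mechanisms is sqrt(sum of mu_i^2)-GDP."""
    return sqrt(sum(m * m for m in mus))

# A single 1-GDP release at eps = 0 gives delta = 2*Phi(0.5) - 1.
print(gdp_to_delta(0.0, 1.0))      # ~0.3829
# Ten 0.3-GDP steps compose to sqrt(0.9)-GDP, not 3.0-GDP.
print(compose_gdp([0.3] * 10))     # ~0.9487
```

The square-root composition rule is what makes the $(\varepsilon,\delta)$ parametrization lossy for iterated mechanisms such as noisy SGD: tracking the single parameter $\mu$ across iterations is exact, whereas tracking $(\varepsilon,\delta)$ pairs step by step is not.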