How private is SGD+noise? -- A CLT approach via Gaussian Differential Privacy
Posted: May 6, 2020
Posted by: Qi Liu
Speaker: Weijie Su, Jinshuo Dong (University of Pennsylvania)
Time: May 11, 2020, 10:00 to 11:00
Venue: Online
It is important to understand the exact privacy guarantee provided by the private algorithms developed during the past decade of rapid progress in differential privacy, especially noisy SGD. In particular, any underestimate of the actual privacy means unnecessary noise in the algorithm and a loss in final accuracy. We observe a central limit behavior in iterative private algorithms, which demonstrates the limitations of the common $(\varepsilon,\delta)$ parametrization in most application scenarios, including deep learning. To address this, we propose a new notion called Gaussian Differential Privacy (GDP) and develop a complete toolkit around it. We carry out various experiments showing how much unnecessary loss of accuracy can be avoided in deep learning applications. Based on joint work with Jinshuo Dong, Aaron Roth, Zhiqi Bu and Qi Long.

Zoom: https://zoom.com.cn/j/66493401785?pwd=bWZ6ZXI1ZEhxWHF6NS9CNzR0eG93Zz09
Meeting ID: 664 9340 1785
Passcode: 294560
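As a rough illustration of the kind of accounting GDP enables, the sketch below applies the CLT-style approximation from the speakers' related work (Bu, Dong, Long and Su, "Deep Learning with Gaussian Differential Privacy"): noisy SGD run for $T$ steps with sampling probability $p$ and noise multiplier $\sigma$ is approximately $\mu$-GDP with $\mu = p\sqrt{T(e^{1/\sigma^2}-1)}$, and a $\mu$-GDP guarantee translates to $(\varepsilon,\delta)$-DP via $\delta(\varepsilon) = \Phi(-\varepsilon/\mu + \mu/2) - e^{\varepsilon}\Phi(-\varepsilon/\mu - \mu/2)$. This is a minimal sketch, not the authors' toolkit; the training parameters in the example are hypothetical.

```python
from math import sqrt, exp
from statistics import NormalDist

def noisy_sgd_mu(p: float, sigma: float, T: int) -> float:
    """CLT-based GDP parameter for noisy SGD (Bu, Dong, Long, Su):
    T steps with sampling probability p and Gaussian noise multiplier
    sigma are approximately mu-GDP with mu = p*sqrt(T*(e^{1/sigma^2}-1))."""
    return p * sqrt(T * (exp(1.0 / sigma**2) - 1.0))

def gdp_to_delta(mu: float, eps: float) -> float:
    """Dual (eps, delta) characterization of mu-GDP:
    delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    Phi = NormalDist().cdf
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)

if __name__ == "__main__":
    # Hypothetical run: batches of 256 from 60,000 examples,
    # noise multiplier 1.1, 15 epochs of noisy SGD.
    n, batch, sigma, epochs = 60_000, 256, 1.1, 15
    p = batch / n                  # per-step sampling probability
    T = epochs * n // batch        # total number of SGD steps
    mu = noisy_sgd_mu(p, sigma, T)
    print(f"mu-GDP parameter: {mu:.3f}")
    print(f"delta at eps=1.0: {gdp_to_delta(mu, 1.0):.2e}")
```

Because $\mu$ composes by simple root-sum-of-squares under GDP, this single number tracks the privacy of the whole iterative run, which is what avoids the over-conservative noise that a fixed $(\varepsilon,\delta)$ analysis would require.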