Huawei's Second-Order Optimizer in MindSpore: THOR
Speaker(s): Mengyun Chen (Huawei)
Time: 10:00-11:00 March 12, 2021
Venue: Online
Abstract: It is well known that second-order optimizers can accelerate the training of deep neural networks; however, the huge computational cost of second-order optimization makes it impractical for real applications. To reduce this cost, many methods have been proposed to approximate the second-order matrix. Inspired by KFAC, we propose a novel Trace-based Hardware-driven layer-ORiented Natural Gradient Descent Computation method, called THOR, to make second-order optimization applicable to real application models. Specifically, we gradually increase the update interval and use the matrix trace to determine which blocks of the Fisher Information Matrix (FIM) need to be updated. Moreover, by leveraging the power of hardware, we have designed a hardware-driven approximation method for computing the FIM to achieve better performance. To demonstrate the effectiveness of THOR, we have conducted extensive experiments. The results show that training ResNet-50 on ImageNet with THOR takes only 66.7 minutes to reach a top-1 accuracy of 75.9% on 8 Ascend 910 processors with MindSpore, a new deep learning framework. Moreover, with more computational resources, THOR takes only 2.7 minutes to reach 75.9% with 256 Ascend 910 processors.
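The trace-based update criterion described in the abstract can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration, not MindSpore's actual API: the function name and the thresholds w1 and w2 are assumptions. It shows how the relative change in a FIM block's trace might decide whether that block is recomputed, reused as-is, or frozen for the rest of training:

```python
import numpy as np

def fim_block_action(fim_new, fim_old, w1=0.01, w2=0.001):
    """Decide how to treat one layer's FIM block based on its trace.

    Hypothetical sketch of a trace-based criterion: compare the
    relative change in the block's trace against two assumed
    thresholds w1 > w2.
    """
    delta = abs(np.trace(fim_new) - np.trace(fim_old)) / abs(np.trace(fim_old))
    if delta > w1:
        return "update"   # block changed noticeably: recompute it
    elif delta > w2:
        return "reuse"    # small change: keep the stale block
    else:
        return "freeze"   # block is stable: stop updating it
```

Under this sketch, blocks whose traces have stabilized are skipped, which is one way the per-iteration cost of second-order updates could be reduced as training progresses.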
Zoom ID: 692 168 942
Exploring Applications of Huawei MindSpore in HPC