Haizhao Yang: Approximation theory and regularization for deep learning (Feb 20, 2019, 11:55 AM)

This talk introduces new approximation theories for deep learning in the settings of parallel computing and high-dimensional problems. We explain the power of function composition in deep neural networks and characterize the approximation capacity of shallow and deep networks for various classes of functions on high-dimensional compact domains. Combined with considerations from parallel computing, our analysis leads to a point of view on choosing network architectures, one that has received little attention in the approximation-theory literature, especially for large-scale training in parallel: deep is good, but too deep may be less attractive. Our analysis also inspires a new regularization method that achieves state-of-the-art performance on most network architectures.
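
The abstract does not give the construction, but a classic example of the power of composition (Telgarsky's sawtooth) conveys the idea: composing a fixed two-ReLU block L times yields a function with 2^L linear pieces, while a one-hidden-layer ReLU network needs on the order of 2^L units to match it exactly. The sketch below, in Python with NumPy, is illustrative only; the helper names hat and compose_hat are our own, and this is not the construction from the talk.

    import numpy as np

    def hat(x):
        """One two-ReLU block computing the tent map on [0, 1]:
        hat(x) = 2x for x <= 1/2 and 2(1 - x) for x > 1/2."""
        relu = lambda z: np.maximum(z, 0.0)
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def compose_hat(x, depth):
        """Feed the input through `depth` copies of the same block.
        The composition is a sawtooth with 2**depth linear pieces, so a
        depth-L network with O(L) units matches a function that a
        one-hidden-layer ReLU network needs roughly 2**L units to
        represent exactly."""
        for _ in range(depth):
            x = hat(x)
        return x

    # Dyadic grid so every kink of the sawtooth lands exactly on a grid point.
    xs = np.linspace(0.0, 1.0, 257)
    for L in (1, 2, 4, 6):
        ys = compose_hat(xs, L)
        # A kink shows up as a nonzero second difference; pieces = kinks + 1.
        kinks = np.count_nonzero(np.abs(np.diff(ys, 2)) > 1e-9)
        print(f"depth={L}: {kinks + 1} linear pieces with {2 * L} ReLU units")

Running this prints 2, 4, 16, and 64 linear pieces for depths 1, 2, 4, and 6, with the unit count growing only linearly in depth: the exponential gap between deep and shallow representation that motivates composition-based approximation theory.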
