Monday, February 20, 2017, 3:00pm
On the Structure of Neural Networks
Philipp Grohs (Universität Wien)
Deep (convolutional) neural networks have recently led to several breakthrough results in practical feature extraction applications. While they have been a central subject of empirical studies during the last decade, a satisfactory conceptual and mathematical explanation for their impressive performance across a wide range of applications is still missing. In this talk we take a first step towards such an understanding. In the first part we examine the structure of neural networks from an approximation-theoretic point of view and study the question of which target functions can be efficiently approximated by a neural network of fixed size. We find that in this respect neural networks are indeed provably superior to standard approximation methods. If time permits, we will, in the second part, consider the specific structure of deep convolutional neural networks and ask how many layers are needed to ensure that most of the features of the input signal are contained in the feature vector generated by the network. This is joint work with H. Boelcskei, G. Kutyniok, P. Petersen and T. Wiatowski.
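As a rough illustration of the kind of question the first part addresses (not a construction from the talk itself), the following sketch approximates a target function with a one-hidden-layer ReLU network of fixed size. The hidden weights and biases are drawn at random and frozen, and only the output layer is fit by least squares; the network size, target function, and sampling grid are all illustrative choices.

```python
import numpy as np

# Illustrative sketch: approximate the target function f(x) = x^2 on [0, 1]
# with a fixed-size one-hidden-layer ReLU network. Hidden weights and biases
# are random and frozen; the output layer is fit by least squares.

rng = np.random.default_rng(0)
n_hidden = 50                        # fixed network width (illustrative)
x = np.linspace(0.0, 1.0, 200)[:, None]
y = (x ** 2).ravel()                 # target function values

W = rng.normal(size=(1, n_hidden))   # frozen hidden-layer weights
b = rng.normal(size=n_hidden)        # frozen hidden-layer biases
H = np.maximum(x @ W + b, 0.0)       # ReLU feature map of the hidden layer

coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights
approx = H @ coef
max_err = np.max(np.abs(approx - y))
print(f"max error of width-{n_hidden} ReLU network: {max_err:.4f}")
```

Even this crude fixed-size construction tracks a smooth target closely on the interval; the talk's results concern much sharper statements about which function classes admit efficient approximation by such networks.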
Location: Raum 008/SeMath, Pontdriesch 14-16, 52062 Aachen