Additional Talk in the Mathematical Colloquium


On Monday, 20 February 2017, Philipp Grohs (Universität Wien) will give an additional talk in the Mathematical Colloquium.

On the Structure of Neural Networks

Deep (convolutional) neural networks have recently led to several breakthrough results in practical feature extraction applications. While they have been a central subject of empirical studies over the last decade, a satisfactory conceptual and mathematical explanation for their impressive performance across a wide range of applications is still missing. In this talk we take a first step towards such an understanding. In the first part, we examine the structure of neural networks from an approximation-theoretic point of view and study the question of which target functions can be efficiently approximated by a neural network of fixed size. We find that, in this respect, neural networks are provably superior to standard approximation methods. If time permits, we will, in a second part, consider the specific structure of deep convolutional neural networks and the question of how many layers are needed for most of the features of the input signal to be contained in the feature vector generated by the network. This is joint work with H. Boelcskei, G. Kutyniok, P. Petersen and T. Wiatowski.

The talk begins at 3:00 p.m. in Room 008 (SeMath, Pontdriesch 14-16).

Coffee and tea will be served afterwards.

Everyone interested is warmly welcome.