
Universal approximation theorem

In the mathematical theory of artificial neural networks, the universal approximation theorem states[1] that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^{n}$, under mild assumptions on the activation function. The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.
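
In one commonly cited formulation, the functions realized by such a network are finite sums of the form

$$
F(x) \;=\; \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\mathsf T} x + b_i\right),
$$

where $\sigma$ is the activation function, $N$ is the number of hidden neurons, and $v_i \in \mathbb{R}$, $w_i \in \mathbb{R}^{n}$, $b_i \in \mathbb{R}$ are the parameters; the theorem then asserts that for any continuous $f$ on a compact set $K \subset \mathbb{R}^{n}$ and any $\varepsilon > 0$ there exist parameters with $\sup_{x \in K} |F(x) - f(x)| < \varepsilon$. (The notation here is introduced for exposition rather than taken verbatim from the cited sources.)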

One of the first versions of the theorem was proved by George Cybenko in 1989 for sigmoid activation functions.[2]

Kurt Hornik showed in 1991[3] that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential to be universal approximators. The output units are always assumed to be linear.
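
As a minimal numerical sketch of this setup (an illustration, not any of the constructions used in the cited proofs), the following Python snippet fits a single hidden layer of sigmoid units with a linear output unit to a continuous target on a compact interval; the hidden weights are drawn at random and only the output weights are solved for by least squares. The target function, weight distributions, and unit counts are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Continuous function to approximate on the compact set [-3, 3].
    return np.sin(2 * x) + 0.5 * x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_one_hidden_layer(x, y, n_hidden):
    # Random input-to-hidden weights and biases; only the linear output
    # weights are fitted, by least squares.
    w = rng.normal(scale=2.0, size=n_hidden)
    b = rng.uniform(-3.0, 3.0, size=n_hidden)
    h = sigmoid(x[:, None] * w + b)            # hidden-layer activations
    v, *_ = np.linalg.lstsq(h, y, rcond=None)  # linear output unit
    return w, b, v

def predict(x, w, b, v):
    return sigmoid(x[:, None] * w + b) @ v

x = np.linspace(-3.0, 3.0, 400)
y = target(x)
for n_hidden in (5, 20, 100):
    w, b, v = fit_one_hidden_layer(x, y, n_hidden)
    err = np.max(np.abs(predict(x, w, b, v) - y))
    print(f"hidden units = {n_hidden:3d}, max |error| = {err:.4f}")
```

Increasing the number of hidden units typically drives the maximum error down; this is the qualitative behaviour the theorem guarantees can be achieved for a suitable choice of parameters.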

Although feed-forward networks with a single hidden layer are universal approximators, the width of such networks may have to be exponentially large. In 2017, Lu et al.[4] proved a universal approximation theorem for width-bounded deep neural networks. In particular, they showed that width-(n+4) networks with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to the $ L^{1} $ distance, provided the network depth is allowed to grow. They also showed that expressive power is limited when the width is at most n: except for a set of measure zero, Lebesgue-integrable functions cannot be approximated by width-n ReLU networks.
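
Written out, the width-bounded statement takes the following $ L^{1} $ form (the symbols $A$ and $F_{A}$ are notation introduced here for the network and the function it computes): for every Lebesgue-integrable $f:\mathbb{R}^{n}\to\mathbb{R}$ and every $\varepsilon>0$, there exists a fully connected ReLU network $A$ of width at most $n+4$, with depth depending on $f$ and $\varepsilon$, such that

$$
\int_{\mathbb{R}^{n}} \left| f(x) - F_{A}(x) \right| \,\mathrm{d}x \;<\; \varepsilon .
$$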

Later, Hanin improved the earlier result,[4] showing that ReLU networks of width n+1 are sufficient to approximate any continuous convex function of n-dimensional input variables.[5]

Universal approximation theorem, representational capacity, and effective capacity

Chapter 5.2 introduces the concepts of representational capacity and effective capacity; these two concepts are closely related to the universal approximation theorem.