The Kolmogorov–Arnold representation theorem revisited☆
1. Introduction
Why are additional hidden layers in a neural network helpful? The Kolmogorov–Arnold representation (KA representation in the following) seems to offer an answer to this question as it shows that every continuous function can be represented by a specific network with two hidden layers (Hecht-Nielsen, 1987). But this interpretation has been highly disputed. Articles discussing the connection between both concepts have titles such as “Representation properties of networks: Kolmogorov’s theorem is irrelevant” (Girosi & Poggio, 1989) and “Kolmogorov’s theorem is relevant” (Kurkova, 1991).
The original version of the KA representation theorem states that for any continuous function $f:[0,1]^d\to\mathbb{R}$, there exist univariate continuous functions $\Phi_q$ and $\psi_{p,q}$, such that
(1.1)  $f(x_1,\dots,x_d)=\sum_{q=0}^{2d}\Phi_q\Big(\sum_{p=1}^{d}\psi_{p,q}(x_p)\Big).$
This means that the univariate functions $\Phi_q$ and $\psi_{p,q}$ are enough for an exact representation of a $d$-variate function. Kolmogorov published the result in 1957, disproving the statement of Hilbert's 13th problem that is concerned with the solution of algebraic equations. The earliest proposals in the literature introducing multiple layers in neural networks date back to the sixties, and the link between the KA representation and multilayer neural networks was made much later.
A ridge function is a function of the form $x\mapsto\sum_{j}g_j(a_j^\top x)$, with vectors $a_j\in\mathbb{R}^d$ and univariate functions $g_j$. The structure of the KA representation can therefore be viewed as the composition of two ridge functions. There exists no exact representation of continuous functions by ridge functions, and matching upper and lower bounds for the best approximations are known (Gordon et al., 2002, Maiorov, 1999, Maiorov et al., 1999). The composition structure is thus essential for the KA representation. A two-hidden-layer feedforward neural network with activation function $\sigma$, hidden layers of width $m_1$ and $m_2$, and one output unit can be written in the form
$x\mapsto\sum_{j=1}^{m_2}c_j\,\sigma\Big(\sum_{i=1}^{m_1}b_{ji}\,\sigma(a_i^\top x+s_i)+t_j\Big).$
Because of the similarity between the KA representation and neural networks, the argument above suggests that additional hidden layers can lead to unexpected features of neural network functions.
There are several reasons why the Kolmogorov–Arnold representation theorem was initially declared irrelevant for neural networks in Girosi and Poggio (1989). The original proof of the KA representation in Kolmogorov (1957) and some later versions are non-constructive, providing very little insight on how the function representation works. Although the inner functions are continuous, they are still rough functions sharing similarities with the Cantor function. Meanwhile, more refined KA representation theorems have been derived, strengthening the connection to neural networks (Braun and Griebel, 2009, Sprecher, 1965, Sprecher, 1996, Sprecher, 1997). Maiorov and Pinkus (1999) showed that the KA representation can essentially be rewritten in the form of a two-hidden-layer neural network for a non-computable activation function, see the literature review in Section 4 for more details. The following KA representation is much more explicit and practical.
Theorem 1 Fix $d\ge2$. There are real numbers $a,b_1,\dots,b_d,c_0,\dots,c_{2d}$ and a continuous and monotone function $\psi$, such that for any continuous function $f:[0,1]^d\to\mathbb{R}$, there exists a continuous function $g$ with
$f(x_1,\dots,x_d)=\sum_{q=0}^{2d}g\Big(\sum_{p=1}^{d}b_p\,\psi(x_p+qa)+c_q\Big).$
(Theorem 2.14 in Braun, 2009)
This representation is based on translations of one inner function $\psi$ and one outer function $g$. The inner function $\psi$ is independent of $f$. The dependence on $q$ in the first layer comes through the shifts $qa$. The right hand side can be realized by a neural network with two hidden layers. The first hidden layer has $d(2d+1)$ units and activation function $\psi$ and the second hidden layer consists of $2d+1$ units with activation function $g$.
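To make the network reading of Theorem 1 concrete, here is a minimal Python sketch of the two-hidden-layer evaluation order. The functions psi and g and the numbers a, b, c are placeholders chosen arbitrarily for illustration; the theorem only asserts that suitable ones exist.

```python
import numpy as np

def ka_network(x, psi, g, a, b, c):
    """Evaluate the two-hidden-layer structure of Theorem 1:
    sum_q g( sum_p b[p] * psi(x[p] + q*a) + c[q] ).
    psi, g, a, b, c are placeholders; the theorem only asserts their existence."""
    d = len(x)
    total = 0.0
    for q in range(2 * d + 1):                                       # second hidden layer: 2d+1 units
        inner = sum(b[p] * psi(x[p] + q * a) for p in range(d))      # first hidden layer: d units per q
        total += g(inner + c[q])
    return total

# toy illustration with arbitrary (not the theorem's) choices of psi and g
d = 2
x = np.array([0.3, 0.7])
print(ka_network(x, psi=np.tanh, g=np.cos, a=0.1, b=np.ones(d), c=np.zeros(2 * d + 1)))
```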
For a given $0<\beta\le1$, we will assume that the represented function $f$ is $\beta$-smooth, which here means that there exists a constant $Q$, such that $|f(x)-f(y)|\le Q|x-y|_\infty^\beta$ for all $x,y\in[0,1]^d$. Let $\varepsilon>0$ be arbitrary. To approximate a $\beta$-smooth function up to an error $\varepsilon$, it is well-known that standard approximation schemes need at least of the order of $\varepsilon^{-d/\beta}$ parameters. This means that any efficient neural network construction mimicking the KA representation and approximating $\beta$-smooth functions up to error $\varepsilon$ should have at most of the order of $\varepsilon^{-d/\beta}$ many network parameters.
Starting from the KA representation, the objective of the article is to derive a deep ReLU network construction that is optimal in terms of the number of parameters. For that reason, we first present novel versions of the KA representation that are easy to prove and also allow us to transfer smoothness from the multivariate function to the outer function. In Section 3 the link is made to deep ReLU networks.
The efficiency of the approximating neural network is also the main difference to the related work (Kurkova, 1992, Montanelli and Yang, 2020). Based on sigmoidal activation functions, the proof of Theorem 2 in Kurkova (1992) proposes a neural network construction based on the KA representation with two hidden layers; the number of hidden units, and hence the number of network weights, required to achieve approximation error of the order of $\varepsilon$ exceeds $\varepsilon^{-d/\beta}$, which is sub-optimal in view of the argument above. The very recent work Montanelli and Yang (2020) uses a modern version of the KA representation that guarantees some smoothness of the interior function. Combined with the general result on function approximation by deep ReLU networks in Yarotsky (2018), a rate is derived that depends on the smoothness of the outer function via a specific function class, see p. 4 in Montanelli and Yang (2020) for a definition. The non-trivial dependence of the outer function on the represented function makes it difficult to derive explicit expressions for the approximation rate if $f$ is $\beta$-smooth. Moreover, as the KA representation only guarantees low regularity of the interior function, it remains unclear whether optimal approximation rates can be obtained.
Although the deep ReLU network proposed in Shen, Yang, and Zhang (2020) is not motivated by the KA representation or space-filling curves, the network construction is quite similar. Section 4 contains a more detailed comparison with this and other related approaches.
2. New versions of the KA representation
The starting point of our work is the apparent connection between the KA representation and space-filling curves (Gorchakov and Mozolenko, 2019, Sprecher and Draghici, 2002). A space-filling curve is a surjective map $\gamma:[0,1]\to[0,1]^d$. This means that it hits every point in $[0,1]^d$ and thus "fills" the cube $[0,1]^d$. Known constructions are based on iterative procedures producing fractal-type shapes. If the inverse $\gamma^{-1}$ existed, we could then rewrite any function $f:[0,1]^d\to\mathbb{R}$ in the form
(2.1)  $f(x)=(f\circ\gamma)\big(\gamma^{-1}(x)\big).$
This would decompose the function $f$ into a function $\gamma^{-1}$ that can be chosen to be independent of $f$ and a univariate function $f\circ\gamma$ containing all the information of the $d$-variate function $f$. Compared to the KA representation, there are two differences. Firstly, the interior function $\gamma^{-1}$ is $d$-variate and not univariate. Secondly, by Netto's theorem (Kupers), a continuous surjective map $\gamma:[0,1]\to[0,1]^d$ with $d\ge2$ cannot be injective and $\gamma^{-1}$ therefore does not exist. The argument above can thus not be made precise for arbitrary dimension $d$ and a continuous space-filling curve $\gamma$.
To illustrate our approach, we first derive a simple KA representation based on (2.1) and with an additive interior function. The identity does not insist on continuity of the functions $g$ and $\psi$, which is the major technical obstacle in the proof of the KA representation. Moreover, the proof does not require that the represented function $f$ is continuous.
Lemma 1 Fix integers $d\ge2$ and $B\ge2$. There exists a monotone function $\psi:[0,1]\to[0,1]$ such that for any function $f:[0,1]^d\to\mathbb{R}$, we can find a function $g:[0,1]\to\mathbb{R}$ with
(2.2)  $f(x_1,\dots,x_d)=g\Big(\sum_{p=1}^{d}B^{1-p}\psi(x_p)\Big)$ for all $(x_1,\dots,x_d)\in[0,1]^d$.
Proof The $B$-adic representation of a number is not unique. For the decimal representation, $0.0999\ldots$ is for instance the same as $0.1$. To avoid any problems that this may cause, we select for each real number one $B$-adic representation $x=\sum_{j\ge1}a_jB^{-j}$ with digits $a_j\in\{0,1,\dots,B-1\}$. Throughout the following, it is often convenient to rewrite $x_p$ in its $B$-adic expansion. Set $x_p=\sum_{j\ge1}a_{j,p}B^{-j}$ and define the function
$\psi(x_p):=\sum_{j\ge1}a_{j,p}B^{-(j-1)d-1}.$
The function $\psi$ is monotone and maps $x_p$ to a number with $B$-adic representation inserting always $d-1$ zeros between the original $B$-adic digits of $x_p$. Multiplication by $B^{-(p-1)}$ shifts moreover the digits by $p-1$ places to the right. From that we obtain the $B$-adic representation
(2.3)  $\sum_{p=1}^{d}B^{1-p}\psi(x_p)=\sum_{j\ge1}\sum_{p=1}^{d}a_{j,p}B^{-(j-1)d-p}.$
Because we can recover the digits $(a_{j,p})_{j,p}$ and hence $(x_1,\dots,x_d)$ from $\sum_{p=1}^{d}B^{1-p}\psi(x_p)$, the map $(x_1,\dots,x_d)\mapsto\sum_{p=1}^{d}B^{1-p}\psi(x_p)$ is invertible. Denote the inverse by $\tau$. We can now define $g:=f\circ\tau$ and this proves the result. □
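The digit interleaving in the proof can be illustrated with a short, self-contained Python sketch that works with finitely many B-adic digits. The function names (digits, psi, interior, tau) are ours, and the truncation at J digits is only for the purpose of the illustration.

```python
import numpy as np

B, d, J = 2, 2, 20   # base, dimension, number of digits kept (finite-precision sketch)

def digits(x, B, J):
    """First J B-adic digits of x in [0,1)."""
    out = []
    for _ in range(J):
        x *= B
        a = int(x)
        out.append(a)
        x -= a
    return out

def psi(x):
    """Insert d-1 zeros between the B-adic digits of x (as in the proof of Lemma 1)."""
    return sum(a * B ** (-(j * d + 1)) for j, a in enumerate(digits(x, B, J)))

def interior(x_vec):
    """Additive interior map sum_p B^{1-p} psi(x_p); its digits interleave the digits of the x_p."""
    return sum(B ** (-p) * psi(xp) for p, xp in enumerate(x_vec))

def tau(y):
    """Inverse of the interior map: de-interleave the digits of y."""
    digs = digits(y, B, J * d)
    return [sum(digs[j * d + p] * B ** (-(j + 1)) for j in range(J)) for p in range(d)]

f = lambda x: np.sin(x[0]) + x[1] ** 2      # any (not necessarily continuous) function
g = lambda y: f(tau(y))                     # outer function g = f o tau
x = [0.3, 0.625]
print(f(x), g(interior(x)))                 # agree up to the truncation at J digits
```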
The proof provides some insights regarding the structure of the KA representation. Although one might find the construction of $\psi$ in the proof very artificial, a substantial amount of neighborhood information persists under the map $x\mapsto\sum_{p=1}^{d}B^{1-p}\psi(x_p)$. Indeed, points that are close are often mapped to nearby values. If for instance $x,y\in[0,1]^d$ are two points coinciding in all components up to the $k$-th $B$-adic digit, then $\sum_{p}B^{1-p}\psi(x_p)$ and $\sum_{p}B^{1-p}\psi(y_p)$ coincide up to the $kd$-th $B$-adic digit. In this sense, the KA representation can be viewed as a two step procedure, where the first step does some extreme dimension reduction. Compared to low-dimensional random embeddings, which by the Johnson–Lindenstrauss lemma nearly preserve the Euclidean distances among points, there seems, however, to be no good general characterization of how the interior function changes distances.
The function $\psi$ is discontinuous at all points with finite $B$-adic representation. The map $x\mapsto\sum_{p=1}^{d}B^{1-p}\psi(x_p)$ defines moreover an order relation on $[0,1]^d$ via $x\preceq y$ if and only if $\sum_{p}B^{1-p}\psi(x_p)\le\sum_{p}B^{1-p}\psi(y_p)$. For $B=2$, the inverse map $\tau$ is often called the Morton order and coincides, up to a rotation of $90$ degrees, with the $Z$-curve in the theory of space-filling curves (Bader, 2013, Section 7.2).
If $f$ is a piecewise constant function on a dyadic grid, the outer function $g$ is also piecewise constant. As a negative result, we show that for this representation, smoothness of $f$ does not translate into smoothness of $g$.
Lemma 2 Let $k$ be a positive integer. Consider representation (2.2) for $B=2$ and let $\psi$ be as in the proof of Lemma 1. (i) If $f$ is piecewise constant on the hypercubes $\prod_{p=1}^{d}[u_p2^{-k},(u_p+1)2^{-k})$, with $u_p\in\{0,\dots,2^k-1\}$, then $g$ is a piecewise constant function on the intervals $[\ell2^{-kd},(\ell+1)2^{-kd})$, $\ell=0,\dots,2^{kd}-1$. (ii) If $f(x)=x_1+\dots+x_d$, then $g$ is discontinuous.
Proof (i) If $s\in[\ell2^{-kd},(\ell+1)2^{-kd})$, we can write $\ell2^{-kd}=\sum_{j=1}^{kd}b_j2^{-j}$ with $b_j\in\{0,1\}$, and the first $kd$ binary digits of $s$ are $b_1,\dots,b_{kd}$. There exist thus $u_1,\dots,u_d\in\{0,\dots,2^k-1\}$, such that $\tau(s)\in\prod_{p=1}^{d}[u_p2^{-k},(u_p+1)2^{-k})$. Since $f$ is constant on this hypercube, the result follows from $g=f\circ\tau$. (ii) If $f$ is the sum of the coordinates, $g\big(\sum_{p=1}^{d}2^{1-p}\psi(x_p)\big)=x_1+\dots+x_d$. For $s\uparrow1/2$ with $s$ in the image of the interior map, we find that $g(s)\to d-1/2$, and for $s=1/2$, $g(1/2)=1/2$. Even stronger, every point with finite binary representation is a point of discontinuity. □
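For the concrete choice $B=d=2$ and $f(x_1,x_2)=x_1+x_2$, the jump of $g$ at $1/2$ can be checked numerically. The digit bookkeeping below is written out directly and all names are ours; this is only an illustration of the argument above.

```python
# Discontinuity of the outer function g at 1/2 for f(x1, x2) = x1 + x2 (B = 2, d = 2):
# g reads x1 from the odd-numbered and x2 from the even-numbered binary digits of its argument.
def g(y, n_bits=50):
    digs = []
    for _ in range(n_bits):
        y *= 2
        b = int(y)
        digs.append(b)
        y -= b
    x1 = sum(b * 2.0 ** -(j + 1) for j, b in enumerate(digs[0::2]))
    x2 = sum(b * 2.0 ** -(j + 1) for j, b in enumerate(digs[1::2]))
    return x1 + x2

print(g(0.5))                  # = 0.5, since 0.5 = 0.1000... in binary
print(g(0.5 - 2.0 ** -30))     # close to 1.5: the binary digits are 0.0111...1
```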
The discontinuity of the space-filling map causes $g$ to be more irregular than $f$. Many constructions of space-filling curves are known, but to obtain a representation of KA type the interior map needs to be an additive function of the form $\sum_p c_p\psi(x_p)$. The additivity condition rules out most of the canonical choices, such as for instance the Hilbert curve. Below, we use the construction underlying the Lebesgue curve and show that this then leads to a representation that allows us to transfer smoothness properties of $f$ to smoothness properties of $g$ and therefore overcomes the shortcomings of the representation in (2.2). In contrast to the earlier result, $g$ is now a function that maps from the Cantor set, in the following denoted by $C$, to the real numbers.
Theorem 2 For fixed dimension $d$, there exists a monotone function $\psi:[0,1]\to C$ (the Cantor set), such that for any function $f:[0,1]^d\to\mathbb{R}$, we can find a function $g$ defined on $C$ such that
(2.4)  $f(x_1,\dots,x_d)=g\Big(\sum_{p=1}^{d}3^{1-p}\psi(x_p)\Big)$ for all $(x_1,\dots,x_d)\in[0,1]^d$;
(i) if $f$ is continuous, then also $g$ is continuous;
(ii) if there exist $0<\beta\le1$ and a constant $Q$, such that $|f(x)-f(y)|\le Q|x-y|_\infty^\beta$, for all $x,y\in[0,1]^d$, then $|g(s)-g(t)|\le 2^\beta Q|s-t|^{\beta\log2/(d\log3)}$, for all $s,t$ in the image of the map $x\mapsto\sum_{p=1}^{d}3^{1-p}\psi(x_p)$;
(iii) if $f(x)=x_1$, then there exist sequences $(s_k)_k,(t_k)_k$ with $s_k-t_k\to0$ and $|g(s_k)-g(t_k)|\ge\tfrac12|s_k-t_k|^{\log2/(d\log3)}$.
Proof The construction of the interior function is similar to the one in the proof of Lemma 1. We associate with each $x\in[0,1]$ one binary representation $x=\sum_{j\ge1}a_j2^{-j}$ and define
(2.5)  $\psi(x):=\sum_{j\ge1}2a_j3^{-(j-1)d-1}.$
The function $\psi$ multiplies the binary digits by two (thus, only the values $0$ and $2$ are possible) and then expresses the digits in a ternary expansion, adding $d-1$ zeros between each two digits. By construction, the Cantor set $C$ consists of all numbers in $[0,1]$ that only have $0$ and $2$ as digits in the ternary expansion. This shows that $\psi(x)\in C$. Define now
(2.6)  $\widetilde\psi(x):=\sum_{p=1}^{d}3^{1-p}\psi(x_p)=0.(2a_{1,1})(2a_{1,2})\cdots(2a_{1,d})(2a_{2,1})(2a_{2,2})\cdots,$
where the right hand side is written in the ternary system and $x_p=\sum_{j\ge1}a_{j,p}2^{-j}$. Because we can recover the binary representation of each $x_p$, the map $\widetilde\psi$ is invertible. Since all ternary digits of $\widetilde\psi(x)$ are $0$ or $2$ for all $j$ and $p$, the image of $\widetilde\psi$ is contained in the Cantor set. We can now define the inverse $\tau:=\widetilde\psi^{-1}$ on the image of $\widetilde\psi$ and set $g:=f\circ\tau$, proving (2.4).
In the next step of the proof, we show that
(2.7)  $|\tau(s)-\tau(t)|_\infty\le2|s-t|^{\log2/(d\log3)}$ for all $s,t$ in the image of $\widetilde\psi$.
For that we extend the proof in Bader (2013, p. 98). Observe that $\tau$ maps $t=0.(2a_{1,1})\cdots(2a_{1,d})(2a_{2,1})\cdots$ (ternary) to the vector $\big(\sum_{j\ge1}a_{j,1}2^{-j},\dots,\sum_{j\ge1}a_{j,d}2^{-j}\big)$. Given arbitrary $s\neq t$ in the image of $\widetilde\psi$, $K$ denotes the integer for which $3^{-(K+1)d}\le|s-t|<3^{-Kd}$. Suppose that the first $Kd$ ternary digits of $s$ and $t$ are not all the same and denote by $k'\le Kd$ the position of the first digit of $s$ that is not the same as the corresponding digit of $t$. Since only the digits $0$ and $2$ are possible, the difference between $s$ and $t$ can be lower bounded by $|s-t|\ge2\cdot3^{-k'}-\sum_{j>k'}2\cdot3^{-j}=3^{-k'}\ge3^{-Kd}$, where the subtracted term accounts for the effect of the later digits. Thus $|s-t|\ge3^{-Kd}$ and this is a contradiction with $|s-t|<3^{-Kd}$. Thus, the first $Kd$ ternary digits of $s$ and $t$ coincide. Using the explicit form of $\tau$, this also implies that $\tau(s)$ and $\tau(t)$ coincide in the first $K$ binary digits in each component. This means that $|\tau(s)-\tau(t)|_\infty\le\sum_{j>K}2^{-j}=2^{-K}$ and together with the definition of $K$, we find
(2.8)  $|\tau(s)-\tau(t)|_\infty\le2^{-K}=2\cdot2^{-(K+1)}=2\big(3^{-(K+1)d}\big)^{\log2/(d\log3)}\le2|s-t|^{\log2/(d\log3)},$
proving (2.7), since $s,t$ were arbitrary. Using again that $g=f\circ\tau$, (i) and (ii) follow. To prove (iii), take $s_k:=\widetilde\psi((2^{-k},0,\dots,0))=2\cdot3^{-(k-1)d-1}$ and $t_k:=\widetilde\psi(0)=0$. Then, $s_k-t_k\to0$ and $|g(s_k)-g(t_k)|=|f((2^{-k},0,\dots,0))-f(0)|=2^{-k}$. Rewriting this yields $|g(s_k)-g(t_k)|\ge\tfrac12|s_k-t_k|^{\log2/(d\log3)}$ for all $k$. □
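The construction of $\psi$, the interleaved ternary expansion and the Hölder-type bound of part (ii) can be illustrated with the following self-contained Python sketch. To sidestep floating-point issues, the interior map and the outer function are evaluated on digit strings; the test function, the evaluation points and all names are our own choices.

```python
import math

d, J = 2, 30   # dimension and number of binary digits kept in this finite-precision sketch

def bits(x, J):
    """First J binary digits of x in [0,1)."""
    out = []
    for _ in range(J):
        x *= 2
        b = int(x)
        out.append(b)
        x -= b
    return out

def interior_digits(x_vec):
    """Ternary digits (all 0 or 2) of sum_p 3^{1-p} psi(x_p) from (2.5)-(2.6):
    the binary digits of the coordinates are doubled and interleaved."""
    coord_bits = [bits(xp, J) for xp in x_vec]
    return [2 * coord_bits[k % d][k // d] for k in range(J * d)]

def ternary_value(digs):
    """Real number with the given ternary digits (a point of the Cantor set)."""
    return sum(a * 3.0 ** (-(k + 1)) for k, a in enumerate(digs))

def g_of(f, digs):
    """Outer function g = f o tau, evaluated on a ternary digit string: de-interleave and halve."""
    x = [sum((digs[j * d + p] // 2) * 2.0 ** (-(j + 1)) for j in range(J)) for p in range(d)]
    return f(x)

f = lambda x: max(x)                  # 1-Lipschitz in the sup-norm, so beta = 1 and Q = 1
u, v = [0.80, 0.30], [0.83, 0.30]
du, dv = interior_digits(u), interior_digits(v)
s, t = ternary_value(du), ternary_value(dv)
lhs = abs(g_of(f, du) - g_of(f, dv))                          # |g(s) - g(t)|
rhs = 2 * abs(s - t) ** (math.log(2) / (d * math.log(3)))     # bound from part (ii)
print(lhs, rhs, lhs <= rhs)
```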
Thus, by restricting to the Cantor set, one can overcome the limitations of Netto's theorem mentioned at the beginning of the section. In fact, by construction and (2.8), $\tau$ is a surjective, invertible and continuous space-filling map from a subset of the Cantor set onto $[0,1]^d$. The previous theorem is in a sense more extreme than the KA representation, as the univariate interior function $\psi$ maps $[0,1]$ to a set of Hausdorff dimension $\log2/(d\log3)$. Siegelmann and Sontag (1994) use a similar construction to prove embeddings of the function spaces generated by circuits into neural network function classes.
Representation (2.4) has the advantage that smoothness imposed on $f$ translates into smoothness properties of $g$. The reason is that the function $\psi$ associates to each $x_p\in[0,1]$ one value in the Cantor set such that values that are far from each other in $[0,1]$ are not mapped to nearby values in the Cantor set. Based on this value in the Cantor set, the outer function $g$ reconstructs the function value $f(x)$. Since the distance of the values in $[0,1]^d$ is linked to the distance of the values in the Cantor set, local variability of the function $f$ does not lead to arbitrarily large fluctuations of the outer function $g$. Therefore smoothness imposed on $f$ translates into smoothness properties of $g$.
A natural question is whether we gain or lose something if, instead of approximating $f$ directly, we use (2.4) and approximate $g$. Recall that the approximation rate should be $N^{-\beta/d}$ if $N$ is the number of free parameters of the approximating function, $\beta$ the smoothness and $d$ the dimension. Since $g$ is by Theorem 2(ii) $\beta'$-smooth with $\beta'=\beta\log2/(d\log3)$ and $g$ is defined on a set with Hausdorff dimension $\log2/\log3$, we see that there is no loss in terms of approximation rates since $(\log2/\log3)/\beta'=d/\beta$. Thus, we can reduce multivariate function approximation to univariate function approximation on the Cantor set. This, however, only holds for $\beta\le1$. Indeed, the last statement of the previous theorem means that for the smooth function $f(x)=x_1$, the outer function $g$ is not more than $\log2/(d\log3)$-smooth, implying that for higher order smoothness, there seems to be a discrepancy between the multivariate and univariate function approximation.
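The bookkeeping of exponents can be double checked in one line: the Hausdorff dimension of the Cantor set divided by the Hölder exponent of $g$ from Theorem 2(ii) equals $d/\beta$, so the exponent in the approximation rate is unchanged. A minimal numerical check, with an arbitrary choice of $d$ and $\beta$:

```python
import math

d, beta = 3, 1.0                                   # any dimension and smoothness beta <= 1
dim_C   = math.log(2) / math.log(3)                # Hausdorff dimension of the Cantor set
beta_g  = beta * math.log(2) / (d * math.log(3))   # Hölder exponent of g from Theorem 2(ii)
print(dim_C / beta_g, d / beta)                    # both equal d/beta: no loss in the rate exponent
```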
The only direct drawback of (2.4) compared to the traditional KA representation is that the interior function $\psi$ is discontinuous. We will see in Section 3 that $\psi$ can, however, be well approximated by a deep neural network.
It is also of interest to study the function class containing all $f$ that are generated by the representation in (2.4) for a $\beta$-smooth outer function $g$. Observe that if $g$ is the identity, then $f$ coincides with the interior function $x\mapsto\sum_{p=1}^{d}3^{1-p}\psi(x_p)$, which is discontinuous. This shows that for any $0<\beta\le1$, the class of all $f$ of the form (2.4) with a $\beta$-smooth function $g$ on the Cantor set is strictly larger than the class of $\beta$-smooth functions. Interestingly, the function class with Lipschitz continuous outer function contains all functions that are piecewise constant on a dyadic partition of $[0,1]^d$.
Lemma 3 Consider representation (2.4) and let $k$ be a positive integer. If $f$ is piecewise constant on the hypercubes $\prod_{p=1}^{d}[u_p2^{-k},(u_p+1)2^{-k})$, with $u_p\in\{0,\dots,2^k-1\}$, then $g$ is a Lipschitz function with Lipschitz constant bounded by $2\,3^{kd}\sup_{x\in[0,1]^d}|f(x)|$.
Proof Let $\psi$, $\widetilde\psi(x):=\sum_{p=1}^{d}3^{1-p}\psi(x_p)$ and $\tau$ be the same as in the proof of Theorem 2. For any vector $x\in[0,1]^d$ there exist integers $u_1,\dots,u_d\in\{0,\dots,2^k-1\}$ such that $x\in\prod_{p=1}^{d}[u_p2^{-k},(u_p+1)2^{-k})$. Since $f$ is constant on these dyadic hypercubes, $g(\widetilde\psi(x))=f(x)$ is constant on each of them. If $s=\widetilde\psi(x)$ and $t=\widetilde\psi(y)$, then, arguing as in the proof of Theorem 2, we find that $x$ and $y$ coincide in the first $k$ binary digits of each component whenever $|s-t|<3^{-kd}$, and $x$ and $y$ then lie in the same hypercube. Therefore, we have $g(s)=g(t)$ if $|s-t|<3^{-kd}$ and $|g(s)-g(t)|\le2\sup_x|f(x)|\le2\,3^{kd}\sup_x|f(x)|\,|s-t|$ if $|s-t|\ge3^{-kd}$. Since $s,t$ were arbitrary, the result follows. □
It is important to realize that the space-filling curves and fractal shapes occur because of the exact identity. It is natural to wonder whether the KA representation leads to an interesting approximation theory. For that, one wants to truncate the number of digits in (2.5), hence reducing the complexity of the interior function $\psi$. We obtain an approximation bound that depends on the dimension $d$ only through the smoothness of the outer function $g$.
Lemma 4 Let $f:[0,1]^d\to\mathbb{R}$ and suppose $K$ is a positive integer. For $x=\sum_{j\ge1}a_j2^{-j}$, define $\psi_K(x):=\sum_{j=1}^{K}2a_j3^{-(j-1)d-1}$. If there exist $0<\beta\le1$ and a constant $Q$, such that $|f(x)-f(y)|\le Q|x-y|_\infty^\beta$, for all $x,y\in[0,1]^d$, then, we can find a univariate function $g$ defined on the Cantor set, such that $|g(s)-g(t)|\le2^\beta Q|s-t|^{\beta\log2/(d\log3)}$, for all $s,t$ in the image of the interior map from Theorem 2, and
$\sup_{x\in[0,1]^d}\Big|f(x)-g\Big(\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)\Big)\Big|\le Q2^{-K\beta}.$
Moreover, $\sup_t|g(t)|\le\sup_{x\in[0,1]^d}|f(x)|$.
Proof From the geometric sum formula, $\sum_{j>K}2^{-j}=2^{-K}$. Let $\psi$, $g$ and $\tau$ be as in Theorem 2 and let $\psi_K$ be as defined in the statement of the lemma. Since $\psi_K(x_p)=\psi(x_p^{(K)})$ with $x_p^{(K)}:=\sum_{j=1}^{K}a_{j,p}2^{-j}$ the truncation of $x_p$ after $K$ binary digits, we then have that $|x_p-x_p^{(K)}|\le\sum_{j>K}2^{-j}=2^{-K}$, for all $p$. Moreover, (2.6) shows that $\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)$ and $\sum_{p=1}^{d}3^{1-p}\psi(x_p)$ are both in the Cantor set and have the same first $Kd$ ternary digits. Thus, using (2.4), we find
$\Big|f(x)-g\Big(\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)\Big)\Big|=\big|f(x)-f(x_1^{(K)},\dots,x_d^{(K)})\big|\le Q2^{-K\beta}.$
The Hölder property of $g$ is part (ii) of Theorem 2. Finally, $\sup_t|g(t)|\le\sup_x|f(x)|$ follows as an immediate consequence of the function representation (2.4). □
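Since $g\big(\sum_{p}3^{1-p}\psi_K(x_p)\big)=f(x_1^{(K)},\dots,x_d^{(K)})$ by construction, the bound of Lemma 4 can be checked empirically by comparing $f$ with its evaluation at coordinates truncated after $K$ binary digits. The following sketch does this for an arbitrarily chosen 1-Lipschitz test function; all names are ours.

```python
import random

d, K = 2, 8      # dimension and truncation level of Lemma 4

def first_bits(x, K):
    out = []
    for _ in range(K):
        x *= 2
        b = int(x)
        out.append(b)
        x -= b
    return out

def truncated_point(x_vec):
    """The point tau(sum_p 3^{1-p} psi_K(x_p)): each coordinate keeps only its first K binary digits."""
    return [sum(b * 2.0 ** (-(j + 1)) for j, b in enumerate(first_bits(xp, K))) for xp in x_vec]

f = lambda x: max(x)                # 1-Lipschitz in the sup-norm, so Q = 1 and beta = 1
worst = 0.0
for _ in range(10_000):
    x = [random.random() for _ in range(d)]
    worst = max(worst, abs(f(x) - f(truncated_point(x))))
print(worst, 2.0 ** -K)             # empirical error <= Q * 2^{-K*beta}, cf. Lemma 4
```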
3. Deep ReLU networks and the KA representation
This section studies the construction of deep ReLU networks imitating the KA approximation in Lemma 4. A deep/multilayer feedforward neural network is a function that can be represented by an acyclic graph with vertices arranged in a finite number of layers. The first layer is called the input layer, the last layer is the output layer and the layers in between are called hidden layers. We say that a deep network has architecture $(p_0,p_1,\dots,p_L,p_{L+1})$ if the number of hidden layers is $L$, and $p_0$, $p_i$ and $p_{L+1}$ are the number of vertices in the input layer, the $i$-th hidden layer and the output layer, respectively. The input layer of $p_0=d$ vertices represents the input $x=(x_1,\dots,x_d)$. For all other layers, each vertex stands for an operation of the form $y\mapsto\sigma(w^\top y+b)$, with $y$ the output (viewed as vector) of the previous layer, $w$ a weight vector, $b$ a shift parameter and $\sigma$ the activation function. Each vertex has its own set of parameters and also the activation function does not need to be the same for all vertices. If for all vertices in the hidden layers the ReLU activation function $\sigma(u)=\max(u,0)$ is used, the network is called a deep ReLU network. As common for regression problems, the activation function in the output layer will be the identity.
Fig. 1. (Left) A deep neural network of width three computing the function $\psi_K$ exactly. In each hidden layer the linear activation function is applied to the left and right unit. The units in the middle use the threshold activation function $u\mapsto 1(u\ge0)$. (Right) A deep ReLU network approximating the function $\psi_K$. For the definitions of the quantities in the displayed weights see the proof of Theorem 3.
Approximation properties of deep neural networks for composed functions are studied in Bauer and Kohler, 2019, Fan et al., 2020, Horowitz and Mammen, 2007, Kohler and Krzyżak, 2017, Mhaskar and Poggio, 2016, Nakada and Imaizumi, 2020, Poggio et al., 2017 and Schmidt-Hieber, 2019, Schmidt-Hieber, 2020. These approaches do, however, not lead to straightforward constructions of ReLU networks exploiting the specific structure of the KA approximation
(3.1)  $f(x)\approx g\Big(\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)\Big)$
in Lemma 4. To find such a construction, recall that the classical neural network interpretation of the KA representation associates the interior function with the activation function in the first layer (Hecht-Nielsen, 1987). Here, we argue that the interior function can be efficiently approximated by a deep ReLU network. The role of the hidden layers is to retrieve the next bit in the binary representation of the input. Interestingly, some of the proposed network constructions to approximate $\beta$-smooth functions use a similar idea without making the link to the KA representation, see Section 4 for more details.
Fig. 1 gives the construction of a network computing $\psi_K$, combining units with linear activation function and threshold activation function $u\mapsto1(u\ge0)$. The main idea is that for $x=\sum_{j\ge1}a_j2^{-j}\in[0,1)$, we can extract the first bit using $a_1=1(x\ge1/2)$ and then define $x':=2x-a_1=\sum_{j\ge1}a_{j+1}2^{-j}$. Iterating the procedure allows us to extract $a_2$ and consequently any further binary digit of $x$. The deep neural network DNN I in Fig. 1 has of the order of $K$ hidden layers and network width three. The left units in the hidden layers successively build the output value; the units in the middle extract the next bit in the binary representation and the units on the right compute the remainder of the input after bit extraction. For the bit extraction algorithm, deep networks obviously lead to much more efficient representations than shallow networks.
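The bit extraction loop underlying DNN I can be written out in a few lines. The function below mirrors the three columns of Fig. 1 (accumulator, threshold, remainder) in plain Python; it is only meant as an illustration of the algorithm, not of the network itself, and the function name is ours.

```python
def psi_K_by_bit_extraction(x, K, d):
    """Mimics DNN I: repeatedly extract the leading bit with a threshold and accumulate
    the partial sums of psi_K(x) = sum_{j<=K} 2 a_j 3^{-(j-1)d-1}."""
    out, remainder = 0.0, x
    for j in range(1, K + 1):
        bit = 1.0 if remainder >= 0.5 else 0.0        # threshold unit (middle column in Fig. 1)
        out += 2 * bit * 3.0 ** (-(j - 1) * d - 1)    # accumulator unit (left column)
        remainder = 2 * remainder - bit               # remainder unit (right column)
    return out

print(psi_K_by_bit_extraction(0.625, K=4, d=2))   # 0.625 = 0.101 in binary, so this is 2*3**-1 + 2*3**-5
```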
Constructing networks computing $\psi_K(x_p)$ for each $p=1,\dots,d$ and combining them yields a network with the same order of hidden layers and network width $3d$, computing the interior function $\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)$ in (3.1). The overall number of non-zero parameters is of the order of $dK$. To approximate a $\beta$-smooth function by a neural network via the KA approximation (3.1), the interior step makes the approximating network deep but uses only very few parameters compared to the approximation of the univariate function $g$.
A close inspection of the network DNN I in Fig. 1 shows that all linear activation functions get non-negative input and can therefore be replaced by the ReLU activation function without changing the outcome. The threshold activation functions can be arbitrarily well approximated by the linear combination of two ReLU units via $1(u\ge0)\approx\delta^{-1}\big((u+\delta)_+-(u)_+\big)$ for $\delta\downarrow0$. If one accepts potentially huge network parameters, the network DNN I in Fig. 1 can therefore be approximated by a deep ReLU network with of the order of $K$ hidden layers and network width four. Consequently, also the construction in (3.1) can be arbitrarily well approximated by deep ReLU networks. It is moreover possible to reduce the size of the network parameters by inserting additional hidden layers in the neural network, see for instance Proposition A.3 in Elbrächter, Perekrestenko, Grohs, and Bölcskei (2019).
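The two-ReLU approximation of the threshold can be sketched as follows; the name soft_threshold and the choice of $\delta$ are ours.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def soft_threshold(u, delta):
    """Two ReLU units approximating the indicator 1(u >= 0); exact outside (-delta, 0)."""
    return (relu(u + delta) - relu(u)) / delta

u = np.linspace(-1, 1, 9)
print(soft_threshold(u, delta=1e-6))    # 0 for u <= -delta, 1 for u >= 0, linear ramp in between
```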
Throughout the following we write $\|f\|_\infty:=\sup_{x\in[0,1]^d}|f(x)|$.
Theorem 3 Let $0<\beta\le1$ and let $K$ be a positive integer. If there exists a constant $Q$, such that $|f(x)-f(y)|\le Q|x-y|_\infty^\beta$, for all $x,y\in[0,1]^d$, then there exists a deep ReLU network $\tilde f$ with of the order of $K$ hidden layers, network width of the order of $d$ in all hidden layers except the last one, which consists of the order of $2^{Kd}$ units, and all network weights bounded in absolute value by an explicit constant depending only on $K$, $d$, $Q$ and $\|f\|_\infty$, such that
$\int_{[0,1]^d}\big|f(x)-\tilde f(x)\big|\,dx\le\big(Q+2\|f\|_\infty\big)2^{-K\beta}.$
Proof The proof consists of four parts. In part (A) we construct a ReLU network mimicking the approximand constructed in Lemma 4. For that we first build a ReLU network of small width imitating the bit extraction network DNN I in Fig. 1 and hence the function $\psi_K$. In part (B), it is shown that this ReLU network coincides with the function $\psi_K$ on a subset of $[0,1]$ whose complement has small Lebesgue measure. In part (C), we construct a neural network approximation for the outer function $g$ in Lemma 4. The approximation error is controlled in part (D).
(A): Fix $\delta:=2^{-K}/(dK)$. Given $u\in\mathbb{R}$, we can then define
(3.2)  $T_\delta(u):=\delta^{-1}\big((u+\delta)_+-(u)_+\big).$
There exists a ReLU network with one hidden layer of two units and all network weights bounded in absolute value by $\max(1,\delta^{-1})$ computing the function $T_\delta$. The linear units of DNN I receive non-negative inputs and can be computed exactly by single ReLU units. Because of that, we can now concatenate these networks as illustrated in Fig. 1 to construct a deep ReLU network computing an approximation $\widetilde\psi_K$ of $\psi_K$. Recall that computing $T_\delta$ requires an extra hidden layer with two nodes that is not shown in Fig. 1. Thus, any arrow, except for the ones pointing to the output, adds one additional hidden layer to the ReLU network. The overall number of hidden layers is thus of the order of $K$. Because of the two additional nodes in the non-displayed hidden layers, the width in all hidden layers is four. By checking all edges, it can be seen that all network weights are bounded in absolute value by $\max(2,\delta^{-1})$.
(B): Recall that $\psi_K(x)=\sum_{j=1}^{K}2a_j3^{-(j-1)d-1}$ for $x=\sum_{j\ge1}a_j2^{-j}$. We now show that on a large subset of the unit interval, all thresholds in the bit extraction are evaluated exactly and therefore $\widetilde\psi_K=\psi_K$. The ramp $T_\delta(u)$ coincides with $1(u\ge0)$ whenever $u\notin(-\delta,0)$. For the $j$-th extracted bit, the threshold is applied to $2^{j-1}x$ modulo one, shifted by $1/2$, and can thus only be inexact if $2^{j-1}x$ modulo one falls into a set that is the union of $2^{j-1}$ intervals of length $2^{-(j-1)}\delta$ and has Lebesgue measure $\delta$. The set $D_1\subset[0,1]$ on which at least one of the first $K$ bits is extracted inexactly has therefore Lebesgue measure bounded by $K\delta$, and the deep ReLU network constructed in part (A) computes the function $\psi_K$ exactly on $[0,1]\setminus D_1$. The set $D:=\{x\in[0,1]^d:x_p\in D_1\ \text{for some }p\}$ has Lebesgue measure bounded by $dK\delta=2^{-K}$. This completes the proof for part (B).
(C): Now we construct a shallow ReLU network interpolating the outer function $g$ in Lemma 4 at the points $\sum_{p=1}^{d}3^{1-p}\psi_K(u_p2^{-K})$, $u_p\in\{0,\dots,2^K-1\}$. Denote these $2^{Kd}$ points, ordered increasingly, by $t_1<t_2<\dots<t_{2^{Kd}}$. For any $u$,
$\tilde g(u):=g(t_1)+\sum_{i=1}^{2^{Kd}-1}\frac{g(t_{i+1})-g(t_i)}{t_{i+1}-t_i}\big((u-t_i)_+-(u-t_{i+1})_+\big)$
defines a piecewise linear function with $\tilde g(t_i)=g(t_i)$ for all $i$. The function $\tilde g$ can therefore be represented by a shallow ReLU network with of the order of $2^{Kd}$ units in the hidden layer. Moreover, since $\tilde g$ is a piecewise linear interpolation of $g$, $\|\tilde g\|_\infty\le\|g\|_\infty\le\|f\|_\infty$. Finally, we bound the size of the network weights. Consecutive interpolation points are at distance at least $3^{-Kd}$, and by Lemma 4, $|g(t_{i+1})-g(t_i)|\le2^\beta Q|t_{i+1}-t_i|^{\beta\log2/(d\log3)}$. Since $\beta\log2/(d\log3)\le1$ and $|t_{i+1}-t_i|\le1$, we conclude that all network weights can be chosen to be smaller than $2(Q\vee\|f\|_\infty)3^{Kd}$.
Fig. 2. Construction of the deep ReLU network in part (D) of the proof for Theorem 3.
(D): Fig. 2 shows how $d$ parallel copies of the network from part (A), a linear layer forming $\sum_{p=1}^{d}3^{1-p}\widetilde\psi_K(x_p)$ and the shallow network from part (C) can be combined into a deep ReLU network $\tilde f$ with the architecture and weight bounds claimed in the theorem, computing the function $\tilde f(x)=\tilde g\big(\sum_{p=1}^{d}3^{1-p}\widetilde\psi_K(x_p)\big)$. For $x\notin D$, part (B) shows that $\sum_{p=1}^{d}3^{1-p}\widetilde\psi_K(x_p)=\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)$, which is one of the interpolation points $t_i$, so that the interpolation property implies $\tilde f(x)=g\big(\sum_{p=1}^{d}3^{1-p}\psi_K(x_p)\big)$. Together with the approximation bound in Lemma 4, we conclude that $|f(x)-\tilde f(x)|\le Q2^{-K\beta}$ for all $x\notin D$. Since $\tilde g$ is a piecewise linear interpolation of $g$, we also have $|f(x)-\tilde f(x)|\le\|f\|_\infty+\|\tilde g\|_\infty\le2\|f\|_\infty$ for all $x$. As shown in (B), the Lebesgue measure of $D$ is bounded by $2^{-K}$. Decomposing the integral and using the approximation bound in Lemma 4,
$\int_{[0,1]^d}|f(x)-\tilde f(x)|\,dx\le Q2^{-K\beta}+2\|f\|_\infty2^{-K}\le\big(Q+2\|f\|_\infty\big)2^{-K\beta},$
using for the last inequality that $2^{-K}\le2^{-K\beta}$ for all $K$ and all $\beta\le1$. □
Recall that for a function class with $N$ parameters, the expected optimal approximation rate for a $\beta$-smooth function in $d$ dimensions is $N^{-\beta/d}$. The previous theorem leads to the rate $2^{-K\beta}$ using of the order of $2^{Kd}$ network parameters. This thus coincides with the expected rate. In contrast to several other constructions, no network sparsity is required to recover the rate. It is unclear whether the construction can be generalized to higher order smoothness or anisotropic smoothness.
The function approximation in Lemma 4 is quite similar to tree-based methods in statistical learning. CART or MARS, for instance, select a partition of the input space by making successive splits along different directions and then fit a piecewise constant (or piecewise linear) function on the selected partition (Hastie, Tibshirani, & Friedman, 2009, Section 9.2). The KA approximation is also piecewise constant and the interior function assigns a unique value to each set in the dyadic partition. Enlarging $K$ refines the partition. The deep ReLU network constructed in the proof of Theorem 3 imitates the KA approximation and also relies on a dyadic partition of the input space. By changing the network parameters in the first layers, the unit cube can be split into more general subsets, and similar function systems as the ones underlying MARS or CART can be generated using deep ReLU networks, see also Eckle and Schmidt-Hieber (2019) and Kohler, Krzyzak, and Langer (2019).
As typical for neural network constructions that decompose function approximation into a localization and a local approximation step, the deep ReLU network in Theorem 3 only depends on the represented function via the weights in the last hidden layer. As a consequence, one could use this deep ReLU network construction to initialize stochastic gradient descent. For that, it is natural to sample the weights in the output layer from a given distribution and assign all other network parameters to the corresponding value in the network construction. A comparison with standard network initializations will be addressed in future work.
The fact that in the proposed network construction only the output layer depends on the represented function also matches the observation that in deep learning a considerable amount of information about the represented function is encoded in the last layer. This is exploited in pre-training, where a trained deep network from a different classification problem is taken and only the output layer is learned on the new dataset, see for instance Zeiler and Fergus (2014). The fact that pre-training works shows that deep networks build rather generic function systems in the first layers. For real datasets, the learned parameters in the first hidden layers still exhibit some dependence on the underlying problem, and transfer learning that updates all weights based on the new data outperforms pre-training (He, Girshick, & Dollar, 2019).
4. Related literature
This section is intended to provide a brief overview of related approaches.
Shen et al. (2020) propose a similar deep ReLU network construction without making a link to the KA representation or space-filling curves. The similarity between both approaches can be best seen in their Fig. 5 or the outline of the proof for Theorem 2.1 in their Section 3.2. Indeed, in a first step, the input space is partitioned into smaller hypercubes that are enumerated. The first hidden layers map the input to the index of the hypercube that contains it. This localization step is closely related to the action of the interior function used here. The last hidden layers of the deep ReLU network perform a piecewise linear approximation, which is essentially the same as the implementation of the outer function in the modified KA representation in this paper. To ensure good smoothness properties, Shen et al. (2020) also include gaps in the indexing, which fulfill a similar role as the gaps in the Cantor set here. Lu, Shen, Yang, and Zhang (2020) combine the approach with local Taylor expansions and achieve optimal approximation rates for functions that are smoother than Lipschitz.
Another direction is to search for activation functions with good representation properties based on the modified KA representation in Maiorov and Pinkus (1999). Their Theorem 4 states that one can find a real analytic, strictly increasing, and sigmoidal activation function $\sigma$, such that for any continuous function $f$ on $[0,1]^d$ and any $\varepsilon>0$, there exist parameters $d_i$, $c_{ij}$, $\theta_{ij}$, $\gamma_i$ and vectors $w_{ij}\in\mathbb{R}^d$, satisfying
$\sup_{x\in[0,1]^d}\Big|f(x)-\sum_{i=1}^{6d+3}d_i\,\sigma\Big(\sum_{j=1}^{3d}c_{ij}\,\sigma(w_{ij}^\top x+\theta_{ij})+\gamma_i\Big)\Big|<\varepsilon.$
This removes the dependence of the outer activation function in the KA representation on the represented function $f$. The main issue is that despite its smoothness properties, the activation function $\sigma$ is not computable and it is unclear how to transfer the result to popular activation functions such as the ReLU. One step in this direction has been done in the recent work Guliyev and Ismailov (2018), proving that one can design computable activation functions with this property, at the price of a more complex activation function. Shen, Yang, and Zhang (2020) show that for a neural network with three hidden layers and three explicit and relatively simple but non-differentiable activation functions one can achieve extremely fast approximation rates.
The fact that deep networks can do bit encoding and decoding efficiently has been used previously in Bartlett, Harvey, Liaw, and Mehrabian (2019) to prove (nearly) sharp bounds for the VC dimension of deep ReLU networks and also in Yarotsky (2018) and Yarotsky and Zhevnerchuk (2019) for a different construction to obtain approximation rates of very deep networks with fixed width. There are, however, several distinctive differences. These works employ bit encoding to compress several function values into one number (see for instance Section 5.2.1 in Yarotsky, 2018), while we apply bit extraction to the input vector. In our approach the bit extraction leads to a localization of the input space and the function values only enter in the last hidden layer. It can be checked that in our construction the weight assignment is continuous, that is, small changes in the represented function lead to small changes in the network weights. On the contrary, bit encoding of the function values results in a discontinuous weight assignment and it is known that this is unavoidable for efficient function approximation based on very deep ReLU networks (Yarotsky, 2018).
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
- Bader (2013). Space-filling curves. Texts in Computational Science and Engineering, Vol. 9. Springer, Heidelberg, xiv+278 pp.
- Bartlett et al. (2019). Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20, 1-17.
- Bauer and Kohler (2019). On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of Statistics, 47(4), 2261-2285.
- Braun (2009). An application of Kolmogorov's superposition theorem to function reconstruction in higher dimensions. Ph.D. thesis, Universität Bonn.
- Braun and Griebel (2009). On a constructive proof of Kolmogorov's superposition theorem. Constructive Approximation, 30(3), 653-675.
- Eckle and Schmidt-Hieber (2019). A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Networks, 110, 232-242.
- Elbrächter et al. (2019). Deep neural network approximation theory. arXiv e-prints, arXiv:1901.02220.
- Fan et al. (2020). A theoretical analysis of deep Q-learning. In Bayen, A. M., Jadbabaie, A., Pappas, G., Parrilo, P. A., Recht, B., Tomlin, C., & Zeilinger, M. (Eds.), Proceedings of the 2nd Conference on Learning for Dynamics and Control, Proceedings of Machine Learning Research, Vol. 120, PMLR, 486-489.
- Girosi and Poggio (1989). Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation, 1(4), 465-469.
- Gorchakov and Mozolenko (2019). Analysis of approaches to the universal approximation of a continuous function using Kolmogorov's superposition. In 2019 International Conference on Engineering and Telecommunication, 1-4. doi:10.1109/EnT47717.2019.9030591.
- Gordon et al. (2002). On the best approximation by ridge functions in the uniform norm. Constructive Approximation, 18(1), 61-85.
- Guliyev and Ismailov (2018). Approximation capability of two hidden layer feedforward neural networks with fixed weights. Neurocomputing, 316, 262-269.
- Hastie et al. (2009). The elements of statistical learning (2nd ed.). Springer Series in Statistics. Springer, New York, xxii+745 pp.
- He, K., Girshick, R., & Dollar, P. (2019). Rethinking ImageNet pre-training. In The IEEE International Conference on Computer Vision.
- Hecht-Nielsen (1987). Kolmogorov's mapping neural network existence theorem. In Proceedings of the IEEE First International Conference on Neural Networks, Vol. III, IEEE, Piscataway, NJ, 11-13.
- Horowitz and Mammen (2007). Rate-optimal estimation for a general class of nonparametric regression models with unknown link functions. The Annals of Statistics, 35(6), 2589-2619.
- Kohler and Krzyżak (2017). Nonparametric regression based on hierarchical interaction models. IEEE Transactions on Information Theory, 63(3), 1620-1630.
- Kohler et al. (2019). Estimation of a function of low local dimensionality by deep neural networks. arXiv e-prints, arXiv:1908.11140.
- Kolmogorov (1957). On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR, 114, 953-956.
- Kupers, A. On space-filling curves and the Hahn-Mazurkiewicz theorem. Unpublished manuscript.
- Kurkova (1991). Kolmogorov's theorem is relevant. Neural Computation, 3(4), 617-622.
- Kurkova (1992). Kolmogorov's theorem and multilayer neural networks. Neural Networks, 5(3), 501-506.
- Lu et al. (2020). Deep network approximation for smooth functions. arXiv e-prints, arXiv:2001.03040.
- Maiorov (1999). On best approximation by ridge functions. Journal of Approximation Theory, 99(1), 68-94.
- Maiorov et al. (1999). On the approximation of functional classes equipped with a uniform measure using ridge functions. Journal of Approximation Theory, 99(1), 95-111.
- Maiorov and Pinkus (1999). Lower bounds for approximation by MLP neural networks. Neurocomputing, 25(1), 81-91.
- Mhaskar and Poggio (2016). Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications, 14(06), 829-848.
- Montanelli and Yang (2020). Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem. Neural Networks, 129, 1-6.
- Nakada and Imaizumi (2020). Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. Journal of Machine Learning Research, 21(174), 1-38.
- Poggio et al. (2017). Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 14(5), 503-519.
- Schmidt-Hieber (2019). Deep ReLU network approximation of functions on a manifold. arXiv e-prints, arXiv:1908.00695.
- Schmidt-Hieber (2020). Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4), 1875-1897.
- Shen et al. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5), 1768-1811.
- Shen et al. (2020). Neural network approximation: Three hidden layers are enough. arXiv e-prints, arXiv:2010.14075.
- Siegelmann and Sontag (1994). Analog computation via neural networks. Theoretical Computer Science, 131(2), 331-360.
- Sprecher (1965). On the structure of continuous functions of several variables. Transactions of the American Mathematical Society, 115, 340-355.
- Sprecher (1996). A numerical implementation of Kolmogorov's superpositions. Neural Networks, 9(5), 765-772.
- Sprecher (1997). A numerical implementation of Kolmogorov's superpositions II. Neural Networks, 10(3), 447-457.
- Sprecher and Draghici (2002). Space-filling curves and Kolmogorov superposition-based neural networks. Neural Networks, 15(1), 57-67.
- Yarotsky (2018). Optimal approximation of continuous functions by very deep ReLU networks. In Bubeck, S., Perchet, V., & Rigollet, P. (Eds.), Proceedings of the 31st Conference on Learning Theory, Proceedings of Machine Learning Research, Vol. 75, 639-649.
- Yarotsky and Zhevnerchuk (2019). The phase diagram of approximation rates for deep neural networks. arXiv e-prints, arXiv:1906.09477.
- Zeiler and Fergus (2014). Visualizing and understanding convolutional networks. In Fleet, D., Pajdla, T., Schiele, B., & Tuytelaars, T. (Eds.), Computer Vision, Springer International Publishing, Cham, 818-833.
☆ The research has been supported by the Dutch STAR network and a Vidi grant from the Dutch science organization (NWO), The Netherlands. This work was done while the author was visiting the Simons Institute for the Theory of Computing. The constructive comments and suggestions shared by the associate editor and the three referees resulted in a significantly improved version of the article. The author wants to thank Matus Telgarsky for helpful remarks and for pointing to the article Siegelmann and Sontag (1994).