Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation

Abstract

The study of universal approximation properties (UAP) of neural networks (NNs) has a long history. When the network width is unlimited, a single hidden layer suffices for UAP. In contrast, when the depth is unlimited, the width required for UAP must be at least the critical width $w^*_{\min}=\max(d_x,d_y)$, where $d_x$ and $d_y$ are the dimensions of the input and output, respectively. Recently, \cite{cai2022achieve} showed that a leaky-ReLU NN with this critical width can achieve UAP for $L^p$ functions on a compact domain $K$, \emph{i.e.}, the UAP for $L^p(K,\mathbb{R}^{d_y})$. This paper examines the uniform UAP for the function class $C(K,\mathbb{R}^{d_y})$ and gives the exact minimum width of the leaky-ReLU NN as $w_{\min}=\max(d_x+1,d_y)+1_{d_y=d_x+1}$, which involves the effect of the output dimension. To obtain this result, we propose a novel lift-flow-discretization approach, which shows that the uniform UAP has a deep connection with topological theory.

Comment: ICML 2023 camera ready

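For concreteness, the minimum-width formula can be evaluated on a couple of illustrative dimension pairs (the dimensions below are chosen for illustration only, not taken from the paper):
\[
d_x = 2,\ d_y = 3:\quad w_{\min} = \max(2+1,\,3) + 1_{3=2+1} = 3 + 1 = 4,
\]
\[
d_x = 2,\ d_y = 5:\quad w_{\min} = \max(2+1,\,5) + 1_{5=2+1} = 5 + 0 = 5.
\]
The indicator $1_{d_y=d_x+1}$ thus contributes one extra neuron exactly when the output dimension exceeds the input dimension by one.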