The study of universal approximation properties (UAP) for neural networks
(NN) has a long history. When the network width is unlimited, only a single
hidden layer is sufficient for UAP. In contrast, when the depth is unlimited,
the width for UAP must be at least the critical width
$w^*_{\min}=\max(d_x,d_y)$, where $d_x$ and $d_y$ are the dimensions of the
input and output, respectively. Recently, \cite{cai2022achieve} showed that a
leaky-ReLU NN with this critical width can achieve UAP for $L^p$ functions on a
compact domain $K$, \emph{i.e.,} the UAP for $L^p(K,\mathbb{R}^{d_y})$. This
paper examines the uniform UAP for the function class $C(K,\mathbb{R}^{d_y})$ and
gives the exact minimum width of the leaky-ReLU NN as
$w_{\min}=\max(d_x+1,d_y)+\mathbf{1}_{d_y=d_x+1}$, which involves the effects of the
output dimensions. To obtain this result, we propose a novel
lift-flow-discretization approach that shows that the uniform UAP has a deep
connection with topological theory.

Comment: ICML 2023 camera ready
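The two width formulas stated above can be made concrete in a short sketch. The function names below are illustrative, not from the paper; the formulas are the $L^p$ critical width $w^*_{\min}=\max(d_x,d_y)$ and the uniform-UAP minimum width $w_{\min}=\max(d_x+1,d_y)+\mathbf{1}_{d_y=d_x+1}$:

```python
def min_width_lp(d_x: int, d_y: int) -> int:
    """Critical width for L^p UAP of leaky-ReLU NNs: w*_min = max(d_x, d_y)."""
    return max(d_x, d_y)

def min_width_uniform(d_x: int, d_y: int) -> int:
    """Exact minimum width for uniform UAP on C(K, R^{d_y}):
    w_min = max(d_x + 1, d_y) + 1_{d_y = d_x + 1},
    where the indicator adds 1 only when d_y equals d_x + 1."""
    return max(d_x + 1, d_y) + (1 if d_y == d_x + 1 else 0)

# The uniform minimum width depends on the output dimension d_y in a way
# the L^p critical width does not:
print(min_width_lp(2, 3))       # max(2, 3) = 3
print(min_width_uniform(2, 3))  # max(3, 3) + 1 = 4, since d_y = d_x + 1
print(min_width_uniform(2, 2))  # max(3, 2) + 0 = 3
```

Note how the indicator term makes the uniform width strictly larger than the $L^p$ critical width whenever $d_y=d_x+1$, the "effect of the output dimensions" mentioned above.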