Universal approximation with complex-valued deep narrow neural networks
We study the universality of complex-valued neural networks with bounded
widths and arbitrary depths. Under mild assumptions, we give a full description
of those activation functions for which the associated networks are universal,
i.e., capable of approximating continuous functions to arbitrary accuracy on
compact domains.
Precisely, we show that deep narrow complex-valued networks are universal if
and only if their activation function is neither holomorphic, nor
antiholomorphic, nor ℝ-affine. This is a much larger class of
functions than in the dual setting of arbitrary width and fixed depth. Unlike
in the real case, the sufficient width differs significantly depending on the
considered activation function. We show that a width of 2n + 2m + 5 is always
sufficient and that in general a width of max{2n, 2m} is necessary. We
prove, however, that a width of n + m + 4 suffices for a rich subclass of the
admissible activation functions. Here, n and m denote the input and output
dimensions of the considered networks.

Comment: v2: correct typo in arXiv abstract
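To make the setting concrete, here is a minimal sketch (not the paper's construction) of a deep, narrow complex-valued network. The activation acts separately on the real and imaginary parts, so it is neither holomorphic, nor antiholomorphic, nor ℝ-affine; the sizes n, m, width, and depth below are purely illustrative.

```python
import numpy as np

def activation(z):
    """Split-ReLU: relu(Re z) + i * relu(Im z).

    Not holomorphic, not antiholomorphic, not R-affine --
    an example of the admissible class described in the abstract.
    """
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def narrow_complex_net(x, weights, biases):
    """Forward pass of a fixed-width complex network; the last layer is affine."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = activation(W @ h + b)
    W, b = weights[-1], biases[-1]
    return W @ h + b

rng = np.random.default_rng(0)
n, m, width, depth = 2, 1, 6, 10  # hypothetical sizes for illustration
dims = [n] + [width] * depth + [m]
weights = [rng.standard_normal((dims[i + 1], dims[i]))
           + 1j * rng.standard_normal((dims[i + 1], dims[i]))
           for i in range(len(dims) - 1)]
biases = [rng.standard_normal(d) + 1j * rng.standard_normal(d)
          for d in dims[1:]]

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = narrow_complex_net(x, weights, biases)
print(y.shape)  # (1,)
```

The universality results concern how small `width` can be made (as a function of n and m) while depth grows; this snippet only illustrates the network architecture, not an approximation guarantee.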