706 research outputs found
Theory and Implementation of Complex-Valued Neural Networks
This work explains in detail the theory behind complex-valued neural networks
(CVNNs), including Wirtinger calculus, complex backpropagation, and basic
modules such as complex layers, complex activation functions, and complex weight
initialization. We also show the impact of failing to adapt the weight
initialization correctly to the complex domain. This work places a strong
focus on the implementation of such modules in Python using the cvnn toolbox. We
also perform simulations on real-valued data, cast to the complex domain by
means of the Hilbert transform, verifying the potential interest of CVNNs
even for non-complex data. (Comment: 42 pages, 18 figures)
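The casting step described in this abstract can be sketched with SciPy's Hilbert transform, which turns a real-valued signal into its complex analytic signal (a minimal sketch of the general technique; the cvnn toolbox's own API is not shown here):

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative real-valued signal (a 5 Hz cosine over one second).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.cos(2 * np.pi * 5 * t)

# Analytic signal: real part is x, imaginary part is the Hilbert transform of x.
z = hilbert(x)

# The real part of the analytic signal recovers the original real signal,
# so no information is lost when feeding z to a complex-valued network.
assert np.allclose(z.real, x)
print(z.dtype)  # complex128
```

The analytic signal discards the redundant negative-frequency half of the spectrum, which is one common motivation for this particular real-to-complex casting.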
The universal approximation theorem for complex-valued neural networks
We generalize the classical universal approximation theorem for neural
networks to the case of complex-valued neural networks. Precisely, we consider
feedforward networks with a complex activation function σ : ℂ → ℂ in which each
neuron performs the operation ℂᴺ → ℂ, z ↦ σ(b + wᵀz), with weights w ∈ ℂᴺ and
a bias b ∈ ℂ, and with σ applied componentwise. We
completely characterize those activation functions σ for which the
associated complex networks have the universal approximation property, meaning
that they can uniformly approximate any continuous function on any compact
subset of ℂᵈ arbitrarily well.
Unlike in the classical case of real networks, the set of "good activation
functions" which give rise to networks with the universal approximation
property differs significantly depending on whether one considers deep networks
or shallow networks: for deep networks with at least two hidden layers, the
universal approximation property holds as long as σ is neither a
polynomial, a holomorphic function, nor an antiholomorphic function. Shallow
networks, on the other hand, are universal if and only if the real part or the
imaginary part of σ is not a polyharmonic function.
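As an illustration of the neuron operation described in this abstract, the sketch below evaluates one complex neuron z ↦ σ(b + wᵀz) with a split (componentwise on real and imaginary parts) ReLU. This activation is neither a polynomial, holomorphic, nor antiholomorphic; the choice of activation here is ours, made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    # Split ReLU: ReLU applied separately to real and imaginary parts.
    # Not holomorphic, not antiholomorphic, not a polynomial.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def neuron(z, w, b):
    # One complex neuron: sigma(b + w^T z).
    return sigma(b + w @ z)

# Random complex input vector, weights, and bias (N = 4).
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
b = 0.1 + 0.2j

out = neuron(z, w, b)
print(out)
```

A deep network in the sense of the abstract stacks layers of such neurons, with σ applied componentwise to each layer's output.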
Global μ-stability of impulsive complex-valued neural networks with mixed time delays
Impulsive complex-valued neural networks with three kinds of time delays, including leakage delay, discrete delay, and distributed delay, are considered. Based on the homeomorphism mapping principle of the complex domain, a sufficient condition for the existence and uniqueness of the equilibrium point of the addressed complex-valued neural networks is proposed in terms of a linear matrix inequality (LMI). By constructing appropriate Lyapunov-Krasovskii functionals and employing the free weighting matrix method, several delay-dependent criteria for checking the global μ-stability of the complex-valued neural networks are established in the form of LMIs. As direct applications of these results, several criteria on exponential stability, power-stability, and log-stability are obtained. Two examples with simulations are provided to demonstrate the effectiveness of the proposed criteria.
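For context, global μ-stability is commonly defined as follows (a sketch following the standard μ-stability literature; constants and norms are illustrative and may differ from this paper's exact statement):

```latex
% Global mu-stability: the equilibrium \check{z} is globally mu-stable if
% there exist a constant M > 0 and a positive, nondecreasing function
% \mu(t) \to \infty such that every solution z(t) satisfies
\[
  \lVert z(t) - \check{z} \rVert \le \frac{M}{\mu(t)}, \qquad t \ge t_0 .
\]
% Special cases mentioned in the abstract: \mu(t) = e^{\alpha t} yields
% exponential stability, \mu(t) = t^{\alpha} yields power-stability, and a
% logarithmic choice of \mu(t) yields log-stability.
```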
Complex-valued neural networks for fully-temporal micro-Doppler classification
Micro-Doppler analysis commonly makes use of the log-scaled, real-valued spectrogram, and recent work involving deep learning architectures for classification is no exception. Some works in neighboring fields of research directly exploit the raw temporal signal, but do not handle complex numbers, which are inherent to radar IQ signals. In this paper, we propose a complex-valued, fully temporal neural network which simultaneously exploits the raw signal and the spectrogram by introducing a Fourier-like layer suitable for deep architectures. We show improved results under certain conditions on synthetic radar data compared to a real-valued counterpart.
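One way a "Fourier-like layer" of this kind can be realized (our sketch under stated assumptions, not the authors' exact layer) is a trainable complex linear layer whose weights are initialized to DFT atoms: at initialization the layer computes a normalized DFT of the raw IQ signal, and training is then free to move away from the exact Fourier basis. The class and function names below are hypothetical:

```python
import numpy as np

def dft_matrix(n):
    # Standard DFT matrix, unitary normalization: W[k, t] = exp(-2j*pi*k*t/n)/sqrt(n)
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    return np.exp(-2j * np.pi * k * t / n) / np.sqrt(n)

class FourierLikeLayer:
    """Hypothetical sketch: a complex linear layer initialized with DFT atoms.
    In a real model the weights would be trainable parameters."""
    def __init__(self, n):
        self.weights = dft_matrix(n)

    def forward(self, z):
        return self.weights @ z

n = 64
layer = FourierLikeLayer(n)
rng = np.random.default_rng(1)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-in for raw IQ samples

# At initialization the layer output matches the normalized FFT of the input.
out = layer.forward(z)
ref = np.fft.fft(z) / np.sqrt(n)
assert np.allclose(out, ref)
```

Because the weights are complex from the start, such a layer fits naturally into a complex-valued network operating on raw IQ data.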
Towards Understanding Theoretical Advantages of Complex-Reaction Networks
Complex-valued neural networks have attracted increasing attention in recent
years, yet the advantages of complex-valued neural networks over real-valued
networks remain an open question. This work takes one step in
this direction by introducing the \emph{complex-reaction network} with a
fully connected feed-forward architecture. We prove the universal approximation
property for complex-reaction networks, and show that a class of radial
functions can be approximated by a complex-reaction network using a
polynomial number of parameters, whereas real-valued networks need at least
exponentially many parameters to reach the same approximation level. For empirical
risk minimization, our theoretical results show that the critical point set of
complex-reaction networks is a proper subset of that of real-valued networks,
which may offer some insight into finding optimal solutions more easily for
complex-reaction networks.
Ensemble of Single‐Layered Complex‐Valued Neural Networks for Classification Tasks
This paper presents ensemble approaches with single-layered complex-valued
neural networks (CVNNs) to solve real-valued classification problems. Each
component CVNN of an ensemble uses a recently proposed activation function
for its complex-valued neurons (CVNs). A gradient-descent-based learning
algorithm was used to train the component CVNNs. We applied two ensemble
methods, negative correlation learning and bagging, to create the ensembles.
Experimental results on a number of real-world benchmark problems showed a
substantial performance improvement of the ensembles over the individual
single-layered CVNN classifiers. Furthermore, the generalization performances
were nearly equivalent to those obtained by ensembles of real-valued
multilayer neural networks.
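The bagging approach described in this abstract can be sketched as follows. This is a toy illustration only: the paper's specific activation function and training details are not reproduced, so the split-tanh activation, the squared-error update, and the synthetic data below are all our own stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

class SingleLayerCVNN:
    """Toy single-layer complex-valued classifier (illustrative stand-in).
    Real inputs are multiplied by complex weights; the class score combines
    a tanh applied separately to the real and imaginary pre-activations."""
    def __init__(self, n_in, n_out, lr=0.05):
        self.W = 0.1 * (rng.standard_normal((n_out, n_in))
                        + 1j * rng.standard_normal((n_out, n_in)))
        self.b = np.zeros(n_out, dtype=complex)
        self.lr = lr

    def scores(self, X):
        S = X @ self.W.T + self.b
        return np.tanh(S.real) + np.tanh(S.imag)

    def fit(self, X, y, epochs=300):
        T = 2.0 * np.eye(self.W.shape[0])[y] - 1.0   # +/-1 one-hot targets
        for _ in range(epochs):
            S = X @ self.W.T + self.b
            err = np.tanh(S.real) + np.tanh(S.imag) - T
            gr = err * (1.0 - np.tanh(S.real) ** 2)  # gradient w.r.t. Re(S)
            gi = err * (1.0 - np.tanh(S.imag) ** 2)  # gradient w.r.t. Im(S)
            g = gr + 1j * gi
            self.W -= self.lr * (g.T @ X) / len(X)   # X is real-valued here
            self.b -= self.lr * g.mean(axis=0)

    def predict(self, X):
        return self.scores(X).argmax(axis=1)

def bagging_ensemble(X, y, n_models=7):
    # Bagging: each component CVNN is trained on a bootstrap resample.
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        m = SingleLayerCVNN(X.shape[1], int(y.max()) + 1)
        m.fit(X[idx], y[idx])
        models.append(m)
    return models

def ensemble_predict(models, X):
    # Majority vote across the component CVNNs.
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Two separable Gaussian blobs as a stand-in for a real benchmark dataset.
X = np.vstack([rng.standard_normal((50, 2)) + 2.0,
               rng.standard_normal((50, 2)) - 2.0])
y = np.repeat([0, 1], 50)
models = bagging_ensemble(X, y)
acc = (ensemble_predict(models, X) == y).mean()
print(f"ensemble training accuracy: {acc:.2f}")
```

Negative correlation learning, the other ensemble method mentioned, would instead train the components jointly with a penalty that decorrelates their errors; it is not sketched here.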