A Theory of Networks for Approximation and Learning
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions, and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
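A minimal sketch of the strict-interpolation special case the abstract builds on: reconstruct a hypersurface from examples as a sum of radial basis functions centered at the data points, with a small regularization term. This is illustrative only (the paper's GRBF networks use fewer, learned centers); all names and parameter values below are assumptions, not the authors' code.

```python
import numpy as np

def gaussian_rbf(r, sigma=1.0):
    """Radial basis function: depends on the distance r only."""
    return np.exp(-(r / sigma) ** 2)

def rbf_fit(X, y, sigma=1.0, reg=1e-8):
    """Solve (G + reg*I) c = y, where G[i, j] = phi(||x_i - x_j||).
    reg > 0 is the regularization weight; reg -> 0 gives strict interpolation."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = gaussian_rbf(dists, sigma)
    return np.linalg.solve(G + reg * np.eye(len(X)), y)

def rbf_predict(X_new, X, c, sigma=1.0):
    dists = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    return gaussian_rbf(dists, sigma) @ c

# Toy usage: reconstruct a 2-D surface from scattered samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])
c = rbf_fit(X, y, sigma=0.5)
print(rbf_predict(X[:5], X, c, sigma=0.5) - y[:5])  # near-zero residuals
```

GRBF generalizes this scheme by using far fewer centers than data points and moving the centers during learning, which is what gives the prototype interpretation mentioned in the abstract.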
Quantum entanglement and spin control in silicon nanocrystal
Selective coherence control and electrically mediated exchange coupling of a single electron spin between triplet and singlet states are demonstrated using numerically derived optimal control of proton pulses. We obtained spatial confinement below the Bohr radius for the proton spin chain FWHM. Precise manipulation of individual spins and polarization of electron spin states are analyzed via proton-induced emission and controlled population of energy shells in a pure 29Si nanocrystal. Entangled quantum states of channeled proton trajectories are mapped in the transverse and angular phase space of the 29Si axial channel alignment in order to avoid transversal excitations. Proton density and proton energy as functions of impact parameter are characterized in the single-particle density matrix via discretization of the diagonal and nearest off-diagonal elements. We combined high field and low densities (1 MeV/92 nm) to create an inseparable quantum state by superimposing the hyperpolarized proton spin chain with the electron spin of 29Si. Quantum discretization of the density of states (DOS) was performed by Monte Carlo simulation using numerical solutions of the proton equations of motion. The distribution of Gaussian coherent states is obtained by continuous modulation of individual spin phase and amplitude. The obtained results allow precise engineering and faithful mapping of spin states. This would enable effective quantum key distribution (QKD) and transmission of quantum information over remote distances between quantum memory centers for a scalable quantum communication network. Furthermore, the results give insight into the application of channeled-proton subatomic microscopy as a complete and versatile scanning-probe system capable of both quantum engineering of charged-particle states and characterization of quantum states below the diffraction limit in lateral and in-depth resolution.
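The DOS-discretization step lends itself to a compact illustration. Below is a minimal sketch, assuming a toy one-dimensional harmonic continuum potential for the channel and a simple leapfrog integration of classical proton equations of motion; a histogram of the sampled transverse energies stands in for the discretized DOS. Every function name, unit, and parameter here is an illustrative placeholder, not the authors' simulation.

```python
import numpy as np

def transverse_force(x, k=1.0):
    """Toy harmonic continuum potential U(x) = k x^2 / 2 across the channel."""
    return -k * x

def sample_energies(n_protons=10_000, dt=1e-3, steps=2_000, seed=0):
    """Monte Carlo over impact parameters; integrate the transverse motion
    and record each proton's transverse energy."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.5, 0.5, n_protons)  # impact parameters (arbitrary units)
    v = np.zeros(n_protons)
    for _ in range(steps):                 # leapfrog integration
        v += 0.5 * dt * transverse_force(x)
        x += dt * v
        v += 0.5 * dt * transverse_force(x)
    return 0.5 * v**2 + 0.5 * x**2         # kinetic + potential energy

energies = sample_energies()
dos, edges = np.histogram(energies, bins=50, density=True)  # discretized DOS
print(dos[:5], edges[:6])
```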
On automatic synthesis of analog/digital circuits
The paper builds on a recent explicit numerical algorithm for Kolmogorov's superpositions and shows that, in order to synthesize minimum-size (i.e., size-optimal) circuits for implementing any Boolean function, the nonlinear activation function of the gates has to be the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size, it follows that size-optimal solutions for implementing arbitrary Boolean functions can be obtained using analog (or mixed analog/digital) circuits. Conclusions and several comments end the paper.
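For reference, the superposition result this synthesis builds on is Kolmogorov's 1957 theorem: every continuous function of n variables on [0, 1]^n admits an exact representation by continuous functions of one variable,

f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),

where the inner functions \varphi_{q,p} are fixed (independent of f) and only the outer functions \Phi_q depend on f.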
On Kolmogorov's superpositions and Boolean functions
The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the authors show that for obtaining minimum-size neural networks for implementing any Boolean function, the activation function of the neurons must be the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size (in the worst case), it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.
Analysis Of Kolmogorov's Superposition Theorem And Its Implementation In Applications With Low And High Dimensional Data.
In this dissertation, we analyze Kolmogorov's superposition theorem for high dimensions. Our main goal is to explore and demonstrate the feasibility of an accurate implementation of Kolmogorov's theorem. First, based on Lorentz's ideas, we provide a thorough discussion of the proof of the theorem and its numerical implementation in dimension two. We present computational experiments which prove the feasibility of the theorem in applications of low dimensions (namely, dimensions two and three). Next, we present high-dimensional extensions with complete and detailed proofs and provide the implementation that aims at applications with high dimensionality. The amalgamation of these ideas is evidenced by applications in image (two-dimensional) and video (three-dimensional) representations, content-based image and video retrieval, de-noising and in-painting, and Bayesian prior estimation of high-dimensional data from the fields of computer vision and image processing.
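A schematic of the two-dimensional representation structure the dissertation implements: a sum of 2n+1 = 5 outer functions, each applied to a fixed one-dimensional reduction of the input. This is a sketch of the data flow only; the linear "inner" channels and the backfitted piecewise-constant outer functions below are placeholders (Kolmogorov's actual inner functions are nonlinear and non-smooth, per Lorentz's construction), and all names are hypothetical.

```python
import numpy as np

# Fixed "inner" channels: 2n+1 = 5 one-dimensional projections for n = 2 inputs.
CHANNELS = [(1.0, b) for b in (0.30, 0.55, 0.80, 1.05, 1.30)]

def inner(q, X):
    a, b = CHANNELS[q]
    return a * X[:, 0] + b * X[:, 1]

def fit_outer(X, y, bins=64, sweeps=5):
    """Backfit piecewise-constant outer functions Phi_q, one per channel."""
    ts = [inner(q, X) for q in range(len(CHANNELS))]
    edges = [np.linspace(t.min(), t.max(), bins + 1) for t in ts]
    idx = [np.clip(np.digitize(t, e) - 1, 0, bins - 1) for t, e in zip(ts, edges)]
    tables = [np.zeros(bins) for _ in CHANNELS]
    pred = np.zeros(len(y))
    for _ in range(sweeps):
        for q in range(len(CHANNELS)):
            resid = y - (pred - tables[q][idx[q]])  # leave channel q out
            for b in range(bins):
                mask = idx[q] == b
                if mask.any():
                    tables[q][b] = resid[mask].mean()
            pred = sum(tab[i] for tab, i in zip(tables, idx))
    return pred

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (2000, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2
pred = fit_outer(X, y)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```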
How to build VLSI-efficient neural chips
This paper presents several upper and lower bounds on the number of bits required for solving a classification problem, as well as ways in which these bounds can be used to efficiently build neural network chips. The focus is on complexity aspects pertaining to neural networks: (1) size complexity and depth-size tradeoffs, and (2) precision of weights and thresholds as well as limited interconnectivity. The authors show that difficult problems arise (exponential growth in either space, i.e. precision and size, and/or time, i.e. learning and depth) when using neural networks for solving general classes of problems, although particular cases may enjoy better performance. The bounds on the number of bits required for solving a classification problem represent the first step of a general class of constructive algorithms, showing how the quantization of the input space can be done in O(m²n) steps, where m is the number of examples and n is the number of dimensions. The second step of the algorithm has its roots in the implementation of a class of Boolean functions using threshold gates. It is substantiated by mathematical proofs of the size O(mn/Δ) and the depth O(log(mn)/log Δ) of the resulting network, where Δ is the maximum fan-in. Using the fan-in as a parameter, a full class of solutions can be designed. The third step of the algorithm reduces the size of the network and increases its generalization capabilities. Extensions using analogue comparisons allow for real inputs and increase the generalization capabilities at the expense of longer training times. Finally, several solutions which can lower the size of the resulting neural network are detailed. The interesting aspect is that they are obtained for limited, or even constant, fan-ins. Many simulations have been performed in support of these claims.
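To make the stated bounds concrete, a quick back-of-the-envelope evaluation; the values of m, n, and Δ are placeholders chosen only for illustration.

```python
import math

m, n, fan_in = 10_000, 16, 8  # examples, input dimensions, max fan-in (illustrative)

quantization_steps = m**2 * n               # O(m^2 n) input-space quantization
size = m * n / fan_in                       # O(mn / Delta) gates
depth = math.log(m * n) / math.log(fan_in)  # O(log(mn) / log Delta) layers

print(f"quantization ~ {quantization_steps:.2e} steps")
print(f"size  ~ {size:.0f} gates")
print(f"depth ~ {depth:.1f} layers")
```

Note how the fan-in Δ acts as the design knob: raising it shrinks both size and depth, which is what lets the paper derive a full class of solutions from one parameter.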
2D neural hardware versus 3D biological ones
This paper presents important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. The focus then shifts to hardware constraints. The author presents recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, relating area and delay to the neurons' fan-in and the weights' precision. On this basis, it is shown why hardware implementations cannot match their biological inspiration in computational power: the mapping onto silicon lacks the third dimension of biological nets, which translates into reduced fan-in and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; or (2) investigate solutions which would allow use of the third dimension, e.g. optical interconnections.
When constants are important
In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus is on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but little is known about the links among them. The authors start by presenting known results and try to establish connections between them. These show that very difficult problems arise (exponential growth in either space, i.e. precision and size, and/or time, i.e. learning and depth) when resorting to neural networks for solving general problems. The paper presents a solution for lowering some of the constants by exploiting the depth-size tradeoff.