259 research outputs found
CMOS design of chaotic oscillators using state variables: a monolithic Chua's circuit
This paper presents design considerations for monolithic implementation of piecewise-linear (PWL) dynamic systems in CMOS technology. Starting from a review of available CMOS circuit primitives and their respective merits and drawbacks, the paper proposes a synthesis approach for PWL dynamic systems, based on state-variable methods, and identifies the associated analog operators. The GmC approach, combining quasi-linear VCCS's, PWL VCCS's, and capacitors, is then explored regarding the implementation of these operators. CMOS basic building blocks for the realization of the quasi-linear VCCS's and PWL VCCS's are presented and applied to design a Chua's circuit IC. The influence of GmC parasitics on the performance of dynamic PWL systems is illustrated through this example. Measured chaotic attractors from a Chua's circuit prototype are given. The prototype has been fabricated in a 2.4-µm double-poly n-well CMOS technology, and occupies 0.35 mm², with a power consumption of 1.6 mW for a ±2.5-V symmetric supply. Measurements show bifurcation toward a double-scroll Chua's attractor by changing a bias current.
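A rough, minimal sketch of the double-scroll dynamics the paper realizes in CMOS, using the standard dimensionless Chua equations; the parameter values and the fixed-step RK4 integrator below are textbook defaults, not taken from the paper:

```python
import numpy as np

# Dimensionless Chua's circuit with the classic double-scroll parameters.
ALPHA, BETA = 9.0, 100.0 / 7.0
M0, M1 = -8.0 / 7.0, -5.0 / 7.0   # inner/outer slopes of the PWL resistor

def chua_rhs(s):
    x, y, z = s
    fx = M1 * x + 0.5 * (M0 - M1) * (abs(x + 1.0) - abs(x - 1.0))  # PWL resistor
    return np.array([ALPHA * (y - x - fx), x - y + z, -BETA * y])

def simulate(s0, dt=0.005, steps=20000):
    """Fixed-step RK4 integration of the Chua system."""
    s = np.asarray(s0, dtype=float)
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = chua_rhs(s)
        k2 = chua_rhs(s + 0.5 * dt * k1)
        k3 = chua_rhs(s + 0.5 * dt * k2)
        k4 = chua_rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

traj = simulate([0.1, 0.0, 0.0])
print("x range:", traj[:, 0].min(), traj[:, 0].max())
```

In the IC, sweeping the bias current plays the role that changing such parameters plays here: it moves the system through the bifurcation sequence toward the double scroll.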
Building a Chaotic Proven Neural Network
Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.
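The chaotic iterations the paper builds on update one component of a Boolean state per step, the component being chosen by a strategy sequence; with the vectorial negation as update function, these dynamics are the ones proven chaotic in Devaney's sense. A minimal sketch of the iteration scheme (the paper's neural-network encoding of it is not reproduced here):

```python
import random

def chaotic_iterations(f, x0, strategy, steps):
    """At step k, only the component named by the strategy is updated
    through f; every other component is held fixed."""
    x = list(x0)
    for k in range(steps):
        i = strategy[k % len(strategy)]
        x[i] = f(x)[i]
    return x

# Vectorial negation, the update map used in the proven-chaotic construction.
neg = lambda x: [1 - b for b in x]

random.seed(0)
n = 8
x0 = [0] * n
strategy = [random.randrange(n) for _ in range(100)]
out = chaotic_iterations(neg, x0, strategy, 100)
print(out)
```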
Versatile stochastic dot product circuits based on nonvolatile memories for high performance neurocomputing and neurooptimization.
The key operation in stochastic neural networks, which have become the state-of-the-art approach for solving problems in machine learning, information theory, and statistics, is a stochastic dot-product. While there have been many demonstrations of dot-product circuits and, separately, of stochastic neurons, an efficient hardware implementation combining both functionalities is still missing. Here we report compact, fast, energy-efficient, and scalable stochastic dot-product circuits based on either passively integrated metal-oxide memristors or embedded floating-gate memories. The circuit's high performance is due to its mixed-signal implementation, while the efficient stochastic operation is achieved by utilizing the circuit's noise, intrinsic and/or extrinsic to the memory cell array. The dynamic scaling of weights, enabled by analog memory devices, allows for efficient realization of different annealing approaches to improve functionality. The proposed approach is experimentally verified for two representative applications, namely by implementing a neural network for solving a four-node graph-partitioning problem, and a Boltzmann machine with 10 input and 8 hidden neurons.
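The core primitive can be mimicked in software: a dot product (in the hardware, an analog current sum over memristor or floating-gate conductances) followed by a noisy threshold. The sigmoid noise model, effective temperature T, and weight values below are illustrative assumptions, not the paper's measured devices:

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_neuron(w, x, T=1.0):
    """Stochastic dot product: deterministic sum w.x, then a random
    threshold whose effective temperature T models the circuit noise."""
    s = w @ x
    p = 1.0 / (1.0 + np.exp(-s / T))   # sigmoid firing probability
    return rng.random() < p

# Scaling the weights up is equivalent to lowering T: the neuron sharpens
# from stochastic toward deterministic, which is how annealing is realized.
w = np.array([0.5, -0.25, 1.0])
x = np.array([1.0, 1.0, 1.0])          # s = w @ x = 1.25 > 0
rates = [np.mean([stochastic_neuron(w, x, T) for _ in range(2000)])
         for T in (4.0, 1.0, 0.25)]
print(rates)
```

As T decreases, the empirical firing rate climbs from near chance toward a deterministic 1, since the pre-activation is positive.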
Chaotic particle swarm optimization with neural network structure and its application
Abstract: A new particle swarm optimization (PSO) algorithm with a chaotic Hopfield neural network (HNN) structure is proposed. Particles exhibit chaotic behaviour before converging to a stable fixed point determined by the best points found by the individual particles and the swarm. During the evolutionary process, the chaotic search expands the search space of individual particles. Using a chaotic system to determine particle weights helps the PSO escape from local extrema and find the global optimum. The algorithm is applied to some benchmark problems and a pressure vessel problem with nonlinear constraints. The results show that the proposed algorithm consistently outperforms rival algorithms by enhancing search efficiency and improving search quality.
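The generic chaotic-PSO idea can be sketched by letting a chaotic map supply the stochastic coefficients of the velocity update. Note the paper's scheme is built on a chaotic Hopfield network, not the logistic map used here; this sketch, with made-up hyperparameters and a sphere benchmark, only illustrates chaos-driven search:

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(z):
    """Logistic map, the chaotic source replacing uniform random draws."""
    return 4.0 * z * (1.0 - z)

def chaotic_pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    z1, z2 = 0.13, 0.47                      # logistic-map states
    for _ in range(iters):
        z1, z2 = logistic(z1), logistic(z2)  # chaotic coefficients
        v = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda p: float(np.sum(p ** 2))
best, val = chaotic_pso(sphere)
print(best, val)
```

With w = 0.7 the update stays in the deterministic stability region even at the map's extremes, so the swarm contracts onto the best point found while the chaotic coefficients keep perturbing its path.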
Neural avalanches at the edge-of-chaos?
Does the brain operate at criticality, to optimize neural computation? The literature uses different fingerprints of criticality in neural networks, leaving the relationship between them mostly unclear. Here, we compare two specific signatures of criticality, and ask whether they refer to observables at the same critical point, or to two differing phase transitions. Using a recurrent spiking neural network, we demonstrate that avalanche criticality does not necessarily lie at the edge of chaos.
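The avalanche fingerprint of criticality is commonly illustrated with a branching process, where the mean number of descendants sigma is the control parameter and sigma = 1 is the critical point. This toy model (an assumption for illustration, not the paper's recurrent spiking network) shows the avalanche-size statistics that the avalanche signature refers to:

```python
import numpy as np

rng = np.random.default_rng(7)

def avalanche_sizes(sigma, trials=2000, cap=10000):
    """Branching process: each active unit triggers Poisson(sigma) units
    in the next generation; an avalanche ends when activity dies out."""
    sizes = []
    for _ in range(trials):
        active, size = 1, 1
        while active and size < cap:
            active = int(rng.poisson(sigma, active).sum())
            size += active
        sizes.append(size)
    return np.array(sizes)

sub = avalanche_sizes(0.5)    # subcritical: short avalanches, mean size ~2
crit = avalanche_sizes(0.99)  # near-critical: heavy-tailed size distribution
print(sub.mean(), crit.mean())
```

The edge-of-chaos fingerprint, by contrast, concerns the divergence of nearby trajectories, which is exactly the observable the paper shows need not coincide with this avalanche statistic.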
Spatiotemporal convolutional network for time-series prediction and causal inference
Making predictions in a robust way is not easy for nonlinear systems. In this work, a neural network computing framework, i.e., a spatiotemporal convolutional network (STCN), was developed to efficiently and accurately render multistep-ahead predictions of a time series by employing a spatial-temporal information (STI) transformation. The STCN combines the advantages of both the temporal convolutional network (TCN) and the STI equation, which maps the high-dimensional/spatial data to the future temporal values of a target variable, thus naturally providing the prediction of the target variable. From the observed variables, the STCN also infers the causal factors of the target variable in the sense of Granger causality, which are in turn selected as effective spatial information to improve the prediction robustness. The STCN was successfully applied to both benchmark systems and real-world datasets, all of which show superior and robust performance in multistep-ahead prediction, even when the data were perturbed by noise. From both theoretical and computational viewpoints, the STCN has great potential in practical applications in artificial intelligence (AI) or machine learning fields as a model-free method based only on the observed data, and also opens a new way to explore the observed high-dimensional data in a dynamical manner for machine learning.

Comment: 23 pages, 6 figures
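The STI idea at the heart of the STCN, mapping one high-dimensional spatial observation to several future temporal values of a target variable, can be illustrated with a linear toy system and a least-squares map. The real STCN learns this map with a temporal convolutional network; the system, dimensions, and horizon here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a stable random linear dynamical system with small noise.
D, L, T = 10, 5, 500                              # state dim, horizon, length
A = rng.normal(size=(D, D))
A *= 0.95 / np.abs(np.linalg.eigvals(A)).max()    # scale to spectral radius 0.95
X = np.empty((T, D))
X[0] = rng.normal(size=D)
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.01 * rng.normal(size=D)

# STI pairs: (full state at time t) -> (target variable x_0 at t+1 .. t+L).
S = X[: T - L]                                    # spatial information
Y = np.stack([X[k + 1 : T - L + k + 1, 0] for k in range(L)], axis=1)

W, *_ = np.linalg.lstsq(S[:400], Y[:400], rcond=None)   # fit the STI map
pred = S[400:] @ W                                      # multistep forecast
err = float(np.abs(pred - Y[400:]).mean())
print(err)
```

Because one spatial snapshot determines the target's near future (up to noise), the fitted map forecasts L steps ahead from a single observation, which is the transformation the STCN realizes nonlinearly.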