
    Integer Echo State Networks: Hyperdimensional Reservoir Computing

    We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n<8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture yields dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Comment: 10 pages, 10 figures, 1 table
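    The reservoir update described in this abstract can be sketched in a few lines. The following is a minimal editorial illustration, not the authors' reference implementation: it assumes a bipolar hyperdimensional encoding of the input and a clipping nonlinearity that keeps the state in a small integer range; the function names and parameter values are hypothetical.

        import numpy as np

        def intesn_update(state, encoded_input, kappa=7):
            """One intESN reservoir step (illustrative sketch).

            state:         integer reservoir vector
            encoded_input: bipolar (+1/-1) hyperdimensional encoding of the input
            kappa:         clipping bound so each entry fits in a few bits
            """
            shifted = np.roll(state, 1)               # cyclic shift replaces the recurrent matrix multiply
            new_state = shifted + encoded_input       # integer addition of the encoded input
            return np.clip(new_state, -kappa, kappa)  # keep entries in [-kappa, kappa]

        # toy usage: 1000-dimensional reservoir driven by random bipolar encodings
        rng = np.random.default_rng(0)
        x = np.zeros(1000, dtype=np.int8)
        for _ in range(20):
            u = rng.choice([-1, 1], size=1000).astype(np.int8)
            x = intesn_update(x, u)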

    A Bio-Inspired Two-Layer Mixed-Signal Flexible Programmable Chip for Early Vision

    A bio-inspired model for an analog programmable array processor (APAP), based on studies of the vertebrate retina, has permitted the realization of complex programmable spatio-temporal dynamics in VLSI. This model mimics the way in which images are processed in the visual pathway, which makes it a feasible alternative for the implementation of early vision tasks in standard technologies. A prototype chip has been designed and fabricated in 0.5 μm CMOS. It delivers a computing power per silicon area and per power consumption that is among the highest reported for a single chip. The details of the bio-inspired network model, the analog building block design challenges and trade-offs, and some functional test results are presented in this paper. Office of Naval Research (USA) N-000140210884; European Commission IST-1999-19007; Ministerio de Ciencia y Tecnología TIC1999-082

    Current-Mode Techniques for the Implementation of Continuous- and Discrete-Time Cellular Neural Networks

    This paper presents a unified, comprehensive approach to the design of continuous-time (CT) and discrete-time (DT) cellular neural networks (CNN) using CMOS current-mode analog techniques. The net input signals are currents instead of the voltages used in previous approaches, thus avoiding the need for dedicated current-to-voltage interfaces in image processing tasks with photosensor devices. Outputs may be either currents or voltages. Cell design relies on the exploitation of current mirror properties for the efficient implementation of both linear and nonlinear analog operators. These cells are simpler and easier to design than those found in previously reported CT- and DT-CNN devices. Basic design issues are covered, together with discussions of the influence of nonidealities, advanced circuit design issues, and design-for-manufacturability considerations associated with statistical analysis. Three prototypes have been designed for 1.6-μm n-well CMOS technologies. One is discrete-time and can be reconfigured via local logic for noise removal, feature extraction (borders and edges), shadow detection, hole filling, and connected component detection (CCD) on a rectangular grid with unity neighborhood radius. The other two prototypes are continuous-time and fixed-template: one for CCD and the other for noise removal. Experimental results are given illustrating the performance of these prototypes.
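    For readers unfamiliar with template-based CNN processing, the standard discrete-time CNN cell update can be sketched as below. This is a generic behavioral illustration of the kind of computation such chips implement, not a model of the circuits described above; the template values, bias, and grid size are placeholder assumptions.

        import numpy as np
        from scipy.signal import convolve2d

        def dtcnn_step(x, u, A, B, I):
            """One synchronous discrete-time CNN iteration.

            x: cell states, u: input image (both 2-D arrays)
            A: feedback template, B: control template, I: bias
            The output nonlinearity is the hard sign used in DT-CNNs.
            """
            y = np.sign(x)  # cell outputs
            return (convolve2d(y, A, mode="same", boundary="fill")
                    + convolve2d(u, B, mode="same", boundary="fill")
                    + I)

        # toy usage: an edge-like template pair on a random binary image (placeholder values)
        A = np.zeros((3, 3)); A[1, 1] = 1.0
        B = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)
        u = np.sign(np.random.default_rng(2).normal(size=(16, 16)))
        x = u.copy()
        for _ in range(5):
            x = dtcnn_step(x, u, A, B, I=-0.5)
        edges = np.sign(x)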

    Bio-inspired analog parallel array processor chip with programmable spatio-temporal dynamics

    A bio-inspired model for an analog parallel array processor (APAP), based on studies of the vertebrate retina, permits the realization of complex spatio-temporal dynamics in VLSI. This model mimics the way in which images are processed in the natural visual pathway, which makes it a feasible alternative for the implementation of early vision tasks in standard technologies. A prototype chip has been designed and fabricated in 0.5 μm CMOS. Design challenges, trade-offs, and the building blocks of such a high-complexity system (0.5×10^6 transistors, most of them operating in analog mode) are presented in this paper. Comisión Interministerial de Ciencia y Tecnología TIC1999-082

    Energy efficient hybrid computing systems using spin devices

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, like majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ∼20 mV, thereby resulting in low computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate the integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital and analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ∼100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
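    As a purely behavioral illustration of the majority-evaluation neuron mentioned in this abstract, the sketch below models a neuron whose output magnet settles along the sign of the summed (signed) input spin currents. It is a functional abstraction under assumed equal injection currents, chosen by the editor; it does not model the device physics or the paper's simulation framework.

        import numpy as np

        def spin_majority_neuron(inputs, weights=None):
            """Behavioral model: the output nanomagnet follows the sign of the
            net spin current, i.e. a (weighted) majority vote of bipolar inputs.

            inputs:  array of +1/-1 values (orientations of the input magnets)
            weights: optional relative injection-current magnitudes (equal if None)
            """
            inputs = np.asarray(inputs, dtype=float)
            if weights is None:
                weights = np.ones_like(inputs)
            net = np.dot(weights, inputs)   # net spin-polarized current in the channel
            return 1 if net >= 0 else -1    # resulting output magnet orientation

        # toy usage: 5-input majority gate
        print(spin_majority_neuron([+1, +1, -1, +1, -1]))  # -> 1 (three of five inputs are +1)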

    A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks

    In this paper, we address the stability of a broad class of discrete-time hypercomplex-valued Hopfield-type neural networks. To ensure that the neural networks belonging to this class always settle down at a stationary state, we introduce novel hypercomplex number systems referred to as real-part associative hypercomplex number systems. Real-part associative hypercomplex number systems generalize the well-known Cayley-Dickson algebras and real Clifford algebras and include the systems of real numbers, complex numbers, dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as particular instances. Apart from the novel hypercomplex number systems, we introduce a family of hypercomplex-valued activation functions called \mathcal{B}-projection functions. Broadly speaking, a \mathcal{B}-projection function projects the activation potential onto the set of all possible states of a hypercomplex-valued neuron. Using the theory presented in this paper, we confirm the stability analysis of several discrete-time hypercomplex-valued Hopfield-type neural networks from the literature. Moreover, we introduce and provide the stability analysis of a general class of Hopfield-type neural networks on Cayley-Dickson algebras.
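    As one concrete member of the class this abstract describes, the complex-valued case with a phase-quantizing projection activation can be sketched as follows. This is an illustrative instance selected by the editor, not the paper's general construction; the Hermitian weight matrix, the synchronous update, and the number of phase states K are assumptions.

        import numpy as np

        def csign(z, K=8):
            """Project a complex activation potential onto K unit-circle states
            (one concrete example of a projection-style activation)."""
            phase = np.angle(z) % (2 * np.pi)
            k = np.floor(phase / (2 * np.pi / K))
            return np.exp(1j * 2 * np.pi * k / K)

        def hopfield_step(W, x, K=8):
            """One synchronous (illustrative) complex-valued Hopfield update."""
            return csign(W @ x, K)

        # toy usage: Hermitian weights, random initial state on the K phases
        rng = np.random.default_rng(1)
        n, K = 6, 8
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        W = (A + A.conj().T) / 2        # Hermitian weight matrix
        np.fill_diagonal(W, 0)
        x = csign(rng.normal(size=n) + 1j * rng.normal(size=n), K)
        for _ in range(10):
            x = hopfield_step(W, x, K)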

    Mixed-signal CNN array chips for image processing

    Due to their local connectivity and wide functional capabilities, cellular nonlinear networks (CNN) are excellent candidates for the implementation of image processing algorithms using VLSI analog parallel arrays. However, the design of general-purpose, programmable CNN chips with the dimensions required for practical applications poses many challenging problems for analog designers. This is mainly because large silicon area means high development cost, large spatial deviations of design parameters, and low production yield. CNN designers must address several issues to maintain a sufficient accuracy level and production yield, together with a reasonably low development cost, when designing large CNN chips. This paper outlines some of these major issues and their solutions.

    Ultra Low Energy Analog Image Processing Using Spin Neurons

    In this work we present an ultra-low energy, 'on-sensor' image processing architecture based on a cellular array of spin-based neurons. The 'neuron' consists of a lateral spin valve (LSV) with multiple input magnets connected to an output magnet using metal channels. The low-resistance, magneto-metallic neurons operate at a small terminal voltage of ~20mV while performing analog computation upon photosensor inputs. The static current flow across the device terminals is limited to small periods, corresponding to the magnet switching time, and is determined by a low duty-cycle system clock. Thus, the energy cost of analog-mode processing, inevitable in most image sensing applications, is reduced and made comparable to that of dynamic and leakage power consumption in peripheral CMOS units. Performance of the proposed architecture for some common image sensing and processing applications, like feature extraction, halftone compression, and digitization, has been obtained through a physics-based device simulation framework coupled with SPICE. Results indicate that the proposed design scheme can achieve more than two orders of magnitude reduction in computation energy as compared to state-of-the-art CMOS designs based on conventional mixed-signal image acquisition and processing schemes. To the best of the authors' knowledge, this is the first work where the application of nanomagnets (in LSVs) in analog signal processing has been proposed.
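    To make the duty-cycling argument in this abstract concrete, the back-of-envelope calculation below estimates the switching energy of a single neuron evaluation as E = V · I · t_switch. Only the ~20mV terminal voltage comes from the abstract; the current and switching time are hypothetical, order-of-magnitude assumptions introduced here for illustration.

        # Back-of-envelope energy per neuron evaluation: E = V * I * t_switch.
        # Only the ~20 mV terminal voltage is taken from the abstract; the current
        # and switching time below are hypothetical, order-of-magnitude guesses.
        V_terminal = 20e-3   # terminal voltage [V] (~20 mV, from the abstract)
        I_static   = 50e-6   # assumed static current during switching [A]
        t_switch   = 1e-9    # assumed magnet switching time [s]

        energy_per_eval = V_terminal * I_static * t_switch
        print(f"Energy per evaluation: {energy_per_eval * 1e15:.2f} fJ")  # -> 1.00 fJ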

    Highly-Parallel, Highly-Compact Computing Structures Implemented in Nanotechnology

    In this paper, we describe work in which we are evaluating how the evolving properties of nano-electronic devices could best be utilized in highly parallel computing structures. Because of their combination of high performance, low power, and extreme compactness, such structures would have obvious applications in spaceborne environments, both for general mission control and for on-board data analysis. However, the anticipated properties of nano-devices mean that the optimum architecture for such systems is by no means certain. Candidates include single-instruction multiple-datastream (SIMD) arrays, neural networks, and multiple-instruction multiple-datastream (MIMD) assemblies.