
    The PreAmplifier ShAper for the ALICE TPC-Detector

    In this paper the PreAmplifier ShAper (PASA) for the Time Projection Chamber (TPC) of the ALICE experiment at LHC is presented. The ALICE TPC PASA is an ASIC that integrates 16 identical channels, each consisting of a Charge Sensitive Amplifier (CSA) followed by a Pole-Zero network, a self-adaptive bias network, two second-order bridged-T filters, two non-inverting level shifters and a start-up circuit. The circuit is optimized for a detector capacitance of 18-25 pF. For an input capacitance of 25 pF, the PASA features a conversion gain of 12.74 mV/fC, a peaking time of 160 ns, a FWHM of 190 ns, a power consumption of 11.65 mW/ch and an equivalent noise charge of 244e + 17e/pF. The circuit recovers smoothly to the baseline in about 600 ns. An integral non-linearity of 0.19% with an output swing of about 2.1 V is also achieved. The chip has a total area of 18 mm² and is implemented in AMS's C35B3C1 0.35 micron CMOS technology. Detailed characterization tests were performed on about 48000 PASA circuits before mounting them on the ALICE TPC front-end cards. After more than two years of operation of the ALICE TPC with p-p and Pb-Pb collisions, the PASA has been demonstrated to fulfill all requirements.
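
    The quoted conversion gain and peaking time can be pictured with a standard semi-Gaussian shaping model. The Python sketch below is an illustration only: the shaping order and time constant are assumptions chosen to reproduce the 160 ns peaking time, not the actual PASA transfer function.

        # Approximate CR-RC^n (semi-Gaussian) model of a charge-sensitive
        # front end, tuned to the figures quoted in the abstract.
        # N_ORDER and TAU_NS are illustrative assumptions.
        import numpy as np

        GAIN_MV_PER_FC = 12.74          # conversion gain (mV/fC), from the abstract
        T_PEAK_NS = 160.0               # peaking time (ns), from the abstract
        N_ORDER = 4                     # assumed shaping order
        TAU_NS = T_PEAK_NS / N_ORDER    # CR-RC^n peaks at t = n * tau

        def shaper_output_mv(t_ns, q_fc):
            """Approximate shaper output (mV) for an impulse charge q_fc (fC)."""
            x = np.clip(t_ns, 0.0, None) / TAU_NS
            pulse = (x ** N_ORDER) * np.exp(-x)
            pulse /= (N_ORDER ** N_ORDER) * np.exp(-N_ORDER)   # peak normalized to 1
            return GAIN_MV_PER_FC * q_fc * pulse

        t = np.linspace(0.0, 1000.0, 2001)      # 0..1000 ns
        v = shaper_output_mv(t, q_fc=10.0)      # 10 fC test charge
        print(f"peak = {v.max():.1f} mV at t = {t[np.argmax(v)]:.0f} ns")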

    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embed computational principles operating in the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a `basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. (Comment: submitted to Scientific Reports)
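
    The attractor-based associative memory described above can be caricatured in software with a Hopfield-style network: Hebbian learning stores prototype patterns, and a corrupted cue relaxes to the nearest stored pattern. The Python sketch below is an abstract illustration of that idea, not the on-chip plasticity rule or spiking network used in the paper.

        # Hebbian storage of binary patterns and retrieval by relaxation.
        # Network size, number of patterns and the update rule are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 200, 3
        patterns = rng.choice([-1, 1], size=(P, N))     # prototypical stimuli

        W = np.zeros((N, N))
        for xi in patterns:                             # repeated presentations
            W += np.outer(xi, xi) / N                   # Hebbian update
        np.fill_diagonal(W, 0.0)

        def relax(state, steps=20):
            """Iterate the network dynamics towards a point attractor."""
            s = state.copy()
            for _ in range(steps):
                s = np.sign(W @ s)
                s[s == 0] = 1
            return s

        cue = patterns[0].copy()
        flipped = rng.choice(N, size=40, replace=False) # corrupt 20% of the cue
        cue[flipped] *= -1
        overlap = relax(cue) @ patterns[0] / N
        print(f"overlap with stored pattern after relaxation: {overlap:.2f}")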

    Exploiting Device Mismatch in Neuromorphic VLSI Systems to Implement Axonal Delays

    Sheik S, Chicca E, Indiveri G. Exploiting Device Mismatch in Neuromorphic VLSI Systems to Implement Axonal Delays. Presented at the International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia.
    Axonal delays are used in neural computation to implement faithful models of biological neural systems, and in spiking neural network models to solve computationally demanding tasks. While there is an increasing number of software simulations of spiking neural networks that make use of axonal delays, only a small fraction of currently existing hardware neuromorphic systems supports them. In this paper we demonstrate a strategy to implement temporal delays in hardware spiking neural networks distributed across multiple Very Large Scale Integration (VLSI) chips. This is achieved by exploiting the inherent device mismatch present in the analog circuits that implement silicon neurons and synapses inside the chips, and the digital communication infrastructure used to configure the network topology and transmit the spikes across chips. We present an example of a recurrent VLSI spiking neural network that employs axonal delays and demonstrate how the proposed strategy efficiently implements them in hardware.
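
    The underlying idea, that fabrication mismatch between nominally identical neuron circuits yields a spread of spike latencies usable as delays, can be illustrated with a simple software model. In the Python sketch below, mismatch is modeled as per-neuron variation of a leaky integrate-and-fire time constant; all parameter values are assumptions for the illustration and do not correspond to the chips used in the paper.

        # Per-neuron mismatch in the membrane time constant turns identical
        # relay neurons into an implicit bank of axonal delays: each neuron
        # re-emits a spike with a different latency.
        import numpy as np

        rng = np.random.default_rng(1)
        n_relay = 8
        tau_ms = np.clip(20.0 * (1.0 + 0.3 * rng.standard_normal(n_relay)), 10.0, None)
        v_thresh, i_in, dt = 1.0, 0.12, 0.1             # arbitrary units, ms

        def first_spike_latency(tau):
            """Time for a LIF neuron driven by a step current to reach threshold."""
            v, t = 0.0, 0.0
            while v < v_thresh and t < 200.0:
                v += dt * (-v / tau + i_in)             # Euler step of the LIF dynamics
                t += dt
            return t

        delays = np.array([first_spike_latency(tau) for tau in tau_ms])
        print("effective delays (ms):", np.round(np.sort(delays), 1))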

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.
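
    The division of labor described above, a small fault-free manager supervising a larger fault-prone substrate, can be sketched in a few lines of Python. The class, task names and fault model below are illustrative assumptions, not the actual DeSyRe framework or its interfaces.

        # A minimal manager that keeps a map of healthy cores and remaps
        # tasks away from a core when it is reported faulty at run time.
        class SoCManager:
            def __init__(self, n_cores):
                self.healthy = set(range(n_cores))      # fault-prone cores still usable
                self.placement = {}                     # task -> core

            def place(self, task):
                if not self.healthy:
                    raise RuntimeError("no healthy cores left")
                # pick the least-loaded healthy core (simple load balancing)
                core = min(self.healthy,
                           key=lambda c: sum(v == c for v in self.placement.values()))
                self.placement[task] = core
                return core

            def report_fault(self, core):
                """Called when on-line testing flags a core as faulty."""
                self.healthy.discard(core)
                for task, c in list(self.placement.items()):
                    if c == core:                       # migrate affected tasks
                        self.place(task)

        mgr = SoCManager(n_cores=4)
        for task in ("ecg_filter", "alarm", "logger"):  # hypothetical workload
            mgr.place(task)
        mgr.report_fault(0)                             # core 0 degrades; its tasks move
        print(mgr.placement)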

    A modular T-mode design approach for analog neural network hardware implementations

    A modular transconductance-mode (T-mode) design approach is presented for analog hardware implementations of neural networks. This design approach is used to build a modular bidirectional associative memory network. The authors show that the size of the whole system can be increased by interconnecting more modular chips. It is also shown that by changing the interconnection strategy different neural network systems can be implemented, such as a Hopfield network, a winner-take-all network, a simplified ART1 network, or a constrained optimization network. Experimentally measured results from CMOS 2-μm double-metal, double-polysilicon prototypes (MOSIS) are presented.
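
    The claim that one interconnection fabric can realize different networks is easy to mimic in software: the same current-summing module with a saturating nonlinearity behaves as a winner-take-all or as a Hopfield-style associative memory depending only on the weight matrix. The Python sketch below is a behavioral caricature of that idea; the dynamics and parameter values are assumptions, not a model of the actual T-mode circuits.

        # One "module": currents from a weight matrix summed onto each node,
        # passed through a saturating nonlinearity, integrated over time.
        import numpy as np

        def module_step(v, W, i_ext, dt=0.05):
            """One Euler step of v' = -v + W @ tanh(4 v) + i_ext."""
            return v + dt * (-v + W @ np.tanh(4.0 * v) + i_ext)

        def run(W, i_ext, steps=400):
            v = 0.01 * np.random.default_rng(2).standard_normal(len(i_ext))
            for _ in range(steps):
                v = module_step(v, W, i_ext)
            return np.tanh(4.0 * v)

        n = 4
        # Winner-take-all: self-excitation plus uniform mutual inhibition;
        # the unit with the largest input saturates high, the others are suppressed.
        W_wta = 1.2 * np.eye(n) - 0.8 * (np.ones((n, n)) - np.eye(n))
        print("WTA:", np.round(run(W_wta, np.array([0.10, 0.30, 0.20, 0.15])), 2))

        # Hopfield-style module: symmetric Hebbian weights storing one pattern,
        # retrieved from a weak cue.
        xi = np.array([1.0, -1.0, 1.0, -1.0])
        W_hop = np.outer(xi, xi) / n
        np.fill_diagonal(W_hop, 0.0)
        print("Hopfield:", np.round(run(W_hop, 0.2 * xi), 2))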

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems which implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. (Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems)