    Learning in neuro/fuzzy analog chips

    This paper focuses on the design of adaptive mixed-signal fuzzy chips. These chips have a parallel architecture and feature electrically controllable surface maps. The design methodology is based on the use of composite transistors, which are modular and well suited to design automation. It is supported by dedicated, hardware-compatible learning algorithms that combine weight-perturbation and outstar…
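
    The abstract names weight-perturbation learning but does not give the update rule, so the following is a minimal sketch of generic weight perturbation: each weight is nudged in turn and the measured change in loss serves as a finite-difference gradient estimate. The function name and hyperparameters are illustrative assumptions, and the paper's combination with outstar learning is not reproduced here.

        import numpy as np

        def weight_perturbation_step(weights, loss_fn, lr=0.01, delta=1e-3):
            # One update of plain weight perturbation: perturb each weight
            # in turn, measure the loss change, and use it as a
            # finite-difference estimate of that partial derivative. Only
            # forward loss evaluations are needed, which is what makes the
            # scheme attractive for analog hardware.
            base_loss = loss_fn(weights)
            grad = np.zeros_like(weights)
            for i in range(weights.size):
                perturbed = weights.copy()
                perturbed.flat[i] += delta
                grad.flat[i] = (loss_fn(perturbed) - base_loss) / delta
            return weights - lr * grad

    On a chip, loss_fn would be a physical error measurement of the fuzzy surface map rather than a software function.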

    Neural networks: analog VLSI implementation and learning algorithms


    Neuro-memristive Circuits for Edge Computing: A review

    The volume, veracity, variability, and velocity of data produced by the ever-growing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud-computing infrastructure. Increasing the data-processing capability of edge-computing devices at lower power requirements can reduce several overheads for cloud-computing solutions. This paper reviews neuromorphic CMOS-memristive architectures that can be integrated into edge-computing devices. We discuss why neuromorphic architectures are useful for edge devices and present the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing.

    Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation

    We present multiplexed gradient descent (MGD), a gradient-descent framework designed to easily train analog or digital neural networks in hardware. MGD uses zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster than the wall-clock time of training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how MGD can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient-descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth.
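
    As a rough illustration of the zero-order idea behind MGD (the paper's actual multiplexing scheme, i.e. how perturbation signals are assigned to weights, is not reproduced here), the sketch below perturbs all weights simultaneously with independent random signs and correlates the scalar cost change with each weight's perturbation to recover a per-weight gradient estimate, in the spirit of SPSA. The function name and hyperparameters are assumptions.

        import numpy as np

        def mgd_like_step(weights, cost_fn, lr=1e-2, amp=1e-3, rng=None):
            # Perturb every weight at once with an independent random sign,
            # measure the resulting change in the scalar cost, and project
            # that change back onto each weight's perturbation to estimate
            # its partial derivative (SPSA-style zero-order estimate).
            rng = rng or np.random.default_rng()
            signs = rng.choice([-1.0, 1.0], size=weights.shape)
            delta_cost = cost_fn(weights + amp * signs) - cost_fn(weights)
            grad_est = (delta_cost / amp) * signs  # 1/sign == sign for +/-1
            return weights - lr * grad_est

    Only two cost evaluations are needed per step regardless of network size, which is why such schemes can be fast on hardware where the cost is measured directly rather than computed.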

    Digitally Programmable Analogue Circuits for Sensor Conditioning Systems

    This work presents two current-mode integrated circuits designed for sensor-signal preprocessing in embedded systems. The proposed circuits provide good signal transfer and fulfil their function while minimizing the loading effects that arise when complex conditioning architectures are built. The processing architecture based on the proposed building blocks can be reconfigured through digital programmability: the sensor's useful range can be expanded, changes in sensor operation can be compensated for, and undesirable effects such as device mismatch and sensitivity to unwanted physical magnitudes are reduced. The circuits were integrated in a 0.35 μm standard CMOS process. Experimental measurements, loading effects, and a study of two different tuning strategies are presented. From these results, system performance is tested in an application that entails extending the linear range of a magneto-resistive sensor. Their circuit area, average power consumption, and programmability features allow these circuits to be included in embedded sensing systems as part of the analogue conditioning components.
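
    As a toy illustration of the digital-programmability idea (the gain steps, ADC full scale, and function name below are assumptions, not the paper's circuit parameters), a conditioning stage might select the highest programmable gain code that keeps the sensor output inside the converter's linear window, expanding the usable range of weak signals while avoiding saturation on strong ones.

        GAIN_STEPS = [1, 2, 4, 8]   # hypothetical programmable gains
        V_FULL_SCALE = 3.3          # hypothetical ADC full scale (V)

        def select_gain_code(v_sensor, headroom=0.9):
            # Scan from the highest gain down and return the first code
            # whose amplified output still fits within headroom * full
            # scale; fall back to unity gain if even that would saturate.
            for code in reversed(range(len(GAIN_STEPS))):
                if abs(v_sensor) * GAIN_STEPS[code] <= headroom * V_FULL_SCALE:
                    return code
            return 0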