    Neuro-memristive Circuits for Edge Computing: A review

    The volume, veracity, variability, and velocity of data produced by the ever-increasing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing.

    Neural Network-Based Analog-to-Digital Converters

    In this chapter, we present an overview of recent advances in neural network-based analog-to-digital converters (ADCs). Biological neural networks exhibit a natural binarization reflected in their neurosynaptic processes. This natural analog-to-binary conversion ability of neurons can be modeled to emulate analog-to-digital conversion using a set of nonlinear circuit elements and existing artificial neural network models. Since a single neuron consumes on average only about half a nanowatt of power during processing, neurons can perform highly energy-efficient operations, including pattern recognition. Analog-to-digital conversion itself is an example of simple pattern recognition, where the input analog signal can be represented as one of 2^N different patterns for N bits. The classical configuration of a neural network-based ADC is the Hopfield neural network ADC. Improved designs, such as the modified Hopfield network ADC, the T-model neural ADC, and the multilevel neuron-based neural ADC, will be discussed. In addition, the latest neural ADC architectures, such as the hybrid complementary metal-oxide-semiconductor (CMOS)-memristor Hopfield ADC, are covered at the end of this chapter.
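
    As an illustration of the idea behind the Hopfield neural network ADC, the sketch below settles a small discrete Hopfield network whose weights and biases encode the energy function of a 4-bit converter. The network size, the asynchronous update scheme, and the name hopfield_adc are illustrative assumptions rather than details taken from the chapter, and, like the original analog circuit, the network can settle in a local minimum.

    # Minimal sketch of a Tank/Hopfield-style 4-bit neural ADC (illustrative only).
    import numpy as np

    def hopfield_adc(x, n_bits=4, max_iters=100):
        """Convert an analog input x (0 <= x < 2**n_bits) to n_bits binary outputs
        by letting a discrete Hopfield network settle into a low-energy state."""
        # Weights T[i][j] = -2^(i+j) for i != j penalize codes whose weighted sum
        # deviates from x; the diagonal is zero, as required for convergence.
        i = np.arange(n_bits)
        T = -np.outer(2.0**i, 2.0**i)
        np.fill_diagonal(T, 0.0)
        # The bias folds the analog input and the binary self-term into each neuron.
        I = (2.0**i) * x - 2.0**(2*i - 1)
        V = np.zeros(n_bits)              # start with all bits off
        for _ in range(max_iters):
            changed = False
            for k in np.random.permutation(n_bits):   # asynchronous updates
                new_v = 1.0 if (T[k] @ V + I[k]) > 0 else 0.0
                if new_v != V[k]:
                    V[k], changed = new_v, True
            if not changed:
                break                     # settled (possibly in a local minimum)
        return V[::-1].astype(int)        # most significant bit first

    print(hopfield_adc(9.0))   # typically [1 0 0 1]; other local minima are possible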

    Teaching Memory Circuit Elements via Experiment-Based Learning

    The class of memory circuit elements, which comprises memristive, memcapacitive, and meminductive systems, is gaining considerable attention in a broad range of disciplines. This is due to the enormous flexibility these elements provide in solving diverse problems in analog/neuromorphic and digital/quantum computation: the possibility to use them in an integrated computing-memory paradigm, the massively parallel solution of different optimization problems, learning, neural networks, etc. The time is therefore ripe to introduce these elements to the next generation of physicists and engineers with appropriate teaching tools that can be easily implemented in undergraduate teaching laboratories. In this paper, we suggest the use of easy-to-build emulators to provide a hands-on experience for students to learn the fundamental properties and realize several applications of these memelements. We provide explicit examples of problems that could be tackled with these emulators, ranging in difficulty from demonstrating the basic properties of memristive, memcapacitive, and meminductive systems to logic/computation and cross-bar memory. The emulators can be built from off-the-shelf components, with a total cost of a few tens of dollars, providing a relatively inexpensive platform for implementing these exercises in the classroom. We anticipate that this experiment-based learning can be easily adopted and expanded by instructors with many more case studies.
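
    As a software companion to such hardware emulators, the short simulation below reproduces the signature behaviour students would observe on the bench: the pinched current-voltage hysteresis loop of a memristive device under a sinusoidal drive. It uses the textbook linear dopant-drift memristor model; all parameter values and variable names are generic assumptions, not values from the paper.

    # Minimal simulation sketch of an ideal current-controlled memristor
    # driven by a sine wave (linear dopant-drift model; parameters assumed).
    import numpy as np

    R_on, R_off = 100.0, 16e3      # limiting resistances (ohms)
    D, mu_v = 10e-9, 1e-14         # device thickness (m), dopant mobility (m^2 V^-1 s^-1)
    v_amp, freq = 1.0, 1.0         # drive amplitude (V) and frequency (Hz)

    dt = 1e-4
    t = np.arange(0.0, 2.0, dt)    # two drive periods
    w = 0.5 * D                    # internal state: doped-region width, start mid-device
    v_hist, i_hist = [], []
    for tk in t:
        v = v_amp * np.sin(2 * np.pi * freq * tk)
        M = R_on * (w / D) + R_off * (1 - w / D)   # state-dependent memristance
        i = v / M
        w += mu_v * (R_on / D) * i * dt            # linear dopant drift
        w = min(max(w, 0.0), D)                    # keep the state inside the device
        v_hist.append(v)
        i_hist.append(i)

    # Plotting i_hist against v_hist traces a loop pinched at the origin: zero
    # current at zero voltage, but different resistance on the up and down
    # sweeps -- the characteristic memristor "fingerprint".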

    CMOS and memristive hardware for neuromorphic computing

    The ever-increasing processing power demands of digital computers cannot continue to be fulfilled indefinitely unless there is a paradigm shift in computing. Neuromorphic computing, which takes inspiration from the highly parallel, low-power, high-speed, and noise-tolerant computing capabilities of the brain, may provide such a shift. To that end, various aspects of the brain, from its basic building blocks, such as neurons and synapses, to its massively parallel in-memory computing networks, have been studied extensively by the neuroscience community. Concurrently, many researchers across academia and industry have been studying materials, devices, circuits, and systems to implement some of the functions of networks of neurons and synapses and to develop bio-inspired (neuromorphic) computing platforms.

    Cryogenic Neuromorphic Hardware

    The revolution in artificial intelligence (AI) brings enormous storage and data-processing requirements. Large power consumption and hardware overhead have become the main challenges for building next-generation AI hardware. To mitigate this, neuromorphic computing has drawn immense attention due to its excellent capability for data processing with very low power consumption. While relentless research has been underway for years to minimize the power consumption of neuromorphic hardware, we are still a long way from reaching the energy efficiency of the human brain. Furthermore, design complexity and process variation hinder the large-scale implementation of current neuromorphic platforms. Recently, the concept of implementing neuromorphic computing systems at cryogenic temperatures has garnered intense interest thanks to their excellent speed and power metrics. Several cryogenic devices can be engineered to work as neuromorphic primitives with ultra-low power demands. Here we comprehensively review cryogenic neuromorphic hardware. We classify the existing cryogenic neuromorphic hardware into several hierarchical categories and sketch a comparative analysis based on key performance metrics. Our analysis concisely describes the operation of the associated circuit topologies and outlines the advantages and challenges encountered by the state-of-the-art technology platforms. Finally, we provide insights to circumvent these challenges for the future progression of research.