25 research outputs found

    Mixed-Signal Neural Network Implementation with Programmable Neuron

    This thesis introduces the implementation of mixed-signal building blocks of an artificial neural network, namely the neuron and the synaptic multiplier. It also investigates the nonlinear dynamic behavior of a single artificial neuron and presents a Distributed Arithmetic (DA)-based Finite Impulse Response (FIR) filter. All the introduced structures are designed and custom laid out
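
    Distributed arithmetic, as used in the FIR filter above, replaces the multiply-accumulate y = sum_k h[k]*x[k] with bit-serial lookups into a table of partial coefficient sums, one lookup per input bit plane. A minimal software sketch of the idea, with illustrative coefficients and word length (not taken from the thesis):

```python
import numpy as np

def build_da_lut(h):
    """LUT[addr] = sum of h[k] over the bit positions set in addr."""
    n = len(h)
    return np.array([sum(h[k] for k in range(n) if (addr >> k) & 1)
                     for addr in range(1 << n)])

def da_fir_output(h, x, bits=8):
    """One FIR output sample via distributed arithmetic.

    x holds signed integers in [-2**(bits-1), 2**(bits-1)); the MSB bit
    plane carries negative weight (two's complement)."""
    lut = build_da_lut(h)
    y = 0.0
    for j in range(bits):
        addr = 0
        for k, xk in enumerate(x):
            addr |= ((int(xk) >> j) & 1) << k  # bit j of each input word
        partial = lut[addr]
        y += (-partial if j == bits - 1 else partial) * (1 << j)
    return y

h = [0.5, -0.25, 0.125, 1.0]          # illustrative filter taps
x = np.array([3, -7, 100, -128])      # one window of 8-bit input samples
y = da_fir_output(h, x)               # equals np.dot(h, x)
```

    In hardware the LUT has 2^N entries for an N-tap filter, and the loop over bit planes becomes a shift-accumulate register, which is what makes the structure multiplier-free.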

    Efficient hardware implementations of bio-inspired networks

    The human brain, with its massive computational capability and power efficiency in small form factor, continues to inspire the ultimate goal of building machines that can perform tasks without being explicitly programmed. In an effort to mimic the natural information processing paradigms observed in the brain, several neural network generations have been proposed over the years. Among the neural networks inspired by biology, second-generation Artificial or Deep Neural Networks (ANNs/DNNs) use memoryless neuron models and have shown unprecedented success surpassing humans in a wide variety of tasks. Unlike ANNs, third-generation Spiking Neural Networks (SNNs) closely mimic biological neurons by operating on discrete and sparse events in time called spikes, which are obtained by the time integration of previous inputs. Implementation of data-intensive neural network models on computers based on the von Neumann architecture is mainly limited by the continuous data transfer between the physically separated memory and processing units. Hence, non-von Neumann architectural solutions are essential for processing these memory-intensive bio-inspired neural networks in an energy-efficient manner. Among the non-von Neumann architectures, implementations employing non-volatile memory (NVM) devices are most promising due to their compact size and low operating power. However, it is non-trivial to integrate these nanoscale devices on conventional computational substrates due to their non-idealities, such as limited dynamic range, finite bit resolution, programming variability, etc. This dissertation demonstrates the architectural and algorithmic optimizations of implementing bio-inspired neural networks using emerging nanoscale devices. The first half of the dissertation focuses on the hardware acceleration of DNN implementations. A 4-layer stochastic DNN in a crossbar architecture with memristive devices at the cross point is analyzed for accelerating DNN training. 
    This network is then used as a baseline to explore the impact of experimental memristive device behavior on network performance. Programming variability is found to play a more critical role in determining network performance than the devices' other non-ideal characteristics. In addition, noise-resilient inference engines are demonstrated using stochastic memristive DNNs with 100 bits for stochastic encoding during inference and 10 bits for the expensive training. The second half of the dissertation focuses on a novel probabilistic framework for SNNs using Generalized Linear Model (GLM) neurons to capture neuronal behavior. This work demonstrates that probabilistic SNNs achieve performance comparable to equivalent ANNs on two popular benchmarks: handwritten-digit classification and human activity recognition. Given the potential of SNNs for energy-efficient implementations, a hardware accelerator for inference is proposed, termed the Spintronic Accelerator for Probabilistic SNNs (SpinAPS). The learning algorithm is optimized for a hardware-friendly implementation and uses a first-to-spike decoding scheme for low-latency inference. With binary spintronic synapses and digital CMOS logic neurons for computation, SpinAPS achieves a 4x performance improvement in GSOPS/W/mm^2 over a conventional SRAM-based design. Collectively, this work demonstrates the potential of emerging memory technologies in building energy-efficient hardware architectures for deep and spiking neural networks. The design strategies adopted in this work can be extended to other spike- and non-spike-based systems for building embedded solutions with power/energy constraints.
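
    The crossbar analysis above maps trained weights onto device conductances and folds programming variability into the analog matrix-vector multiply. A minimal sketch of that setup, assuming a differential-pair conductance mapping and lognormal programming noise (the conductance range and noise level are illustrative choices, not taken from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)

def program_crossbar(weights, g_min=1e-6, g_max=1e-4, sigma=0.1):
    """Map weights in [-1, 1] onto a differential pair of conductance
    columns, multiplying each device by lognormal programming noise."""
    w = np.clip(weights, -1.0, 1.0)
    g_pos = g_min + (g_max - g_min) * np.maximum(w, 0.0)
    g_neg = g_min + (g_max - g_min) * np.maximum(-w, 0.0)
    noise = rng.lognormal(mean=0.0, sigma=sigma, size=(2,) + w.shape)
    return g_pos * noise[0], g_neg * noise[1]

def crossbar_mvm(g_pos, g_neg, v_in):
    """Analog matrix-vector multiply: column currents sum by Kirchhoff's
    current law; the output is the differential current."""
    return v_in @ (g_pos - g_neg)

W = rng.standard_normal((4, 3)) * 0.5   # toy 4-input, 3-output layer
x = rng.standard_normal(4)              # input voltage vector
g_pos, g_neg = program_crossbar(W, sigma=0.1)
noisy = crossbar_mvm(g_pos, g_neg, x)   # perturbed by device variability
```

    With sigma = 0 the differential pair reproduces the ideal product exactly, so sweeping sigma gives a quick proxy for the programming-variability study described above.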

    Brain Dynamics From Mathematical Perspectives: A Study of Neural Patterning

    The brain is the central hub regulating thought, memory, vision, and many other processes occurring within the body. Neural information transmission occurs through the firing of billions of connected neurons, giving rise to a rich variety of complex patterning. Mathematical models are used alongside direct experimental approaches in understanding the underlying mechanisms at play which drive neural activity, and ultimately, in understanding how the brain works. This thesis focuses on network and continuum models of neural activity, and computational methods used in understanding the rich patterning that arises due to the interplay between non-local coupling and local dynamics. It advances the understanding of patterning in both cortical and sub-cortical domains by utilising the neural field framework in the modelling and analysis of thalamic tissue – where cellular currents are important in shaping the tissue firing response through the post-inhibitory rebound phenomenon – and of cortical tissue. The rich variety of patterning exhibited by different neural field models is demonstrated through a mixture of direct numerical simulation, as well as via a numerical continuation approach and an analytical study of patterned states such as synchrony, spatially extended periodic orbits, bumps, and travelling waves. Linear instability theory about these patterns is developed and used to predict the points at which solutions destabilise and alternative emergent patterns arise. Models of thalamic tissue often exhibit lurching waves, where activity travels across the domain in a saltatory manner. Here, a direct mechanism, showing the birth of lurching waves at a Neimark-Sacker-type instability of the spatially synchronous periodic orbit, is presented. The construction and stability analyses carried out in this thesis employ techniques from non-smooth dynamical systems (such as saltation methods) to treat the Heaviside nature of models. 
    This is often coupled with an Evans function approach to determine the linear stability of patterned states. With the ever-increasing complexity of the neural models being studied, there is a need for systematic ways of studying the non-trivial patterns they exhibit. Computational continuation methods are developed, allowing such a study of periodic solutions and their stability across different parameter regimes through the use of Newton-Krylov solvers. These techniques are complementary to those outlined above. Using these methods, the relationship between the speed of synaptic transmission and the emergent properties of periodic and travelling periodic patterns, such as standing waves and travelling breathers, is studied. Many dynamical systems models of physical phenomena are amenable to analysis using these general computational methods (provided they are sufficiently smooth), and as such their domain of applicability extends beyond the realm of mathematical neuroscience.
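
    As a concrete instance of the analytical bump constructions mentioned above, the classical Amari argument reduces a stationary bump of a 1D Heaviside neural field to a single scalar root-find for its half-width. A sketch in Python, with an illustrative wizard-hat kernel and threshold (these parameter choices are mine, not the thesis's):

```python
import numpy as np
from scipy.optimize import brentq

# Stationary "bump" of u(x) = \int w(x - y) H(u(y) - theta) dy with
# Heaviside firing rate and kernel w(r) = exp(-|r|) - 0.5*exp(-|r|/2).
def W_int(x):
    """Odd antiderivative of the kernel: W(x) = int_0^x w(y) dy."""
    ax = np.abs(x)
    return np.sign(x) * (np.exp(-ax / 2) - np.exp(-ax))

def bump_half_width(theta):
    """A bump of half-width a satisfies u(a) = W(2a) = theta; the wider
    of the two roots is the stable bump in Amari's analysis."""
    return 0.5 * brentq(lambda s: W_int(s) - theta, 2 * np.log(2), 50.0)

def bump_profile(x, a):
    """Closed-form stationary solution u(x) = W(x + a) - W(x - a)."""
    return W_int(x + a) - W_int(x - a)

theta = 0.1                  # firing threshold (illustrative)
a = bump_half_width(theta)   # the profile crosses theta exactly at +/- a
```

    Bumps of this kind exist only for theta below the maximum of W (0.25 for this kernel); their linear stability is then probed with the saltation and Evans-function machinery described above.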

    VLSI Design

    This book presents recent advances in the design of nanometer VLSI chips. The selected topics cover open problems and challenges in areas ranging from design tools, new post-silicon devices, and GPU-based parallel computing to emerging 3D integration and antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, A multilevel memetic algorithm for large SAT-encoded problems, etc.

    Energy-Efficient Neural Network Hardware Design and Circuit Techniques to Enhance Hardware Security

    University of Minnesota Ph.D. dissertation. May 2019. Major: Electrical Engineering. Advisor: Chris Kim. 1 computer file (PDF); ix, 108 pages. Artificial intelligence (AI) algorithms and hardware are being developed at a rapid pace for emerging applications such as self-driving cars, speech/image/video recognition, and deep learning. Today's AI tasks are mostly performed at remote datacenters, while in the future more AI workloads are expected to run on edge devices. To fulfill this goal, innovative design techniques are needed to improve the energy efficiency, form factor, and security of AI chips. This dissertation addresses these challenges through two topics: building energy-efficient AI chips based on various neural network architectures, and designing "chip fingerprint" circuits as well as counterfeit-chip sensors to improve hardware security. First, to deploy AI tasks on edge devices, we develop energy- and area-efficient computing platforms: a novel time-domain computing scheme for fully connected multi-layer perceptron (MLP) neural networks, and an efficient binarized architecture for long short-term memory (LSTM) neural networks. Second, to enhance hardware security and ensure secure data communication between edge devices, the authenticity of each chip must be verified. A Physical Unclonable Function (PUF) is a circuit primitive that can serve as a chip "fingerprint" by generating a unique ID for each chip. Another source of security concerns is counterfeit ICs: recycled and remarked ICs account for more than 80% of counterfeit electronics. To detect counterfeit chips that have been physically compromised, we propose a passive IC tamper sensor. The proposed sensor is shown to efficiently and reliably detect suspicious activity such as high-temperature cycling, rising ambient humidity, and increased dust particles in the chip cavity.
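
    As a toy illustration of the PUF "fingerprint" idea above: uniqueness is commonly quantified as the mean inter-chip Hamming distance between chip IDs, ideally 50% of the ID length. A sketch with process variation modelled as thresholded Gaussian mismatch (purely illustrative; this is not the dissertation's circuit):

```python
import numpy as np

rng = np.random.default_rng(42)

# Each chip's PUF "fingerprint" is modelled as an n-bit ID obtained by
# thresholding random per-device mismatch.
n_chips, n_bits = 20, 128
mismatch = rng.standard_normal((n_chips, n_bits))
ids = (mismatch > 0).astype(np.uint8)

def mean_inter_chip_hd(ids):
    """Uniqueness metric: mean fractional Hamming distance over all
    chip pairs (0.5 for ideally unique fingerprints)."""
    total, pairs = 0.0, 0
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            total += np.count_nonzero(ids[i] ^ ids[j]) / ids.shape[1]
            pairs += 1
    return total / pairs

uniqueness = mean_inter_chip_hd(ids)   # close to 0.5 for this model
```

    Real PUF evaluations also report reliability (intra-chip Hamming distance across re-reads and environmental conditions), which this mismatch-only model does not capture.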

    Applied Mathematics and Computational Physics

    As faster and more efficient numerical algorithms become available, the understanding of the physics and the mathematical foundation behind these new methods will play an increasingly important role. This Special Issue provides a platform for researchers from both academia and industry to present their novel computational methods that have engineering and physics applications

    18th IEEE Workshop on Nonlinear Dynamics of Electronic Systems: Proceedings

    Proceedings of the 18th IEEE Workshop on Nonlinear Dynamics of Electronic Systems, held in Dresden, Germany, 26-28 May 2010. The front matter comprises a welcome address, table of contents, symposium committees, special thanks, and the conference program; the papers are organised into invited talks and regular papers (sessions of Wednesday, May 26; Thursday, May 27; and Friday, May 28, 2010), followed by an author index.

    Engineering Education and Research Using MATLAB

    MATLAB is a software package used primarily in engineering for signal processing, numerical data analysis, modeling, programming, simulation, and computer graphics visualization. In recent years it has become widely accepted as an efficient tool, and its use in scientific communities and academic institutions has grown significantly. This book consists of 20 chapters presenting research carried out with MATLAB tools. Chapters cover techniques for programming and developing Graphical User Interfaces (GUIs), dynamic systems, electric machines, signal and image processing, power electronics, mixed-signal circuits, genetic programming, digital watermarking, control systems, time-series regression modeling, and artificial neural networks.

    5th EUROMECH Nonlinear Dynamics Conference, August 7-12, 2005, Eindhoven: Book of Abstracts
