
    Implantable Cardioverter Defibrillators


    An analogue recurrent neural network for trajectory learning and other industrial applications

    A real-time analogue recurrent neural network (RNN) can extract and learn the unknown dynamics (and features) of a typical control system such as a robot manipulator. The task at hand is a tracking problem in the presence of disturbances. Among the tasks assigned to an industrial robot, one important issue is to determine the motion of its joints and effector. To model the robot dynamics we use a neural network that can be implemented in hardware: the synaptic weights are modelled as variable gain cells that can be realized with a few MOS transistors, and online versions of the synaptic update can be formulated using simple CMOS circuits. The network output signals reproduce the periodicity and other characteristics of the input signal in unsupervised mode. To demonstrate the trajectory learning capabilities, a periodic signal with varying characteristics is used; the architecture, however, allows for more general learning tasks typical of identification and control applications. The periodicity of the input signal ensures convergence of the output to a limit cycle. Because the architecture depends on the network generating a stable limit cycle, and hence a periodic solution that is robust over an interval of parameter uncertainties, we currently restrict the input signals to a periodic format. The simulated network consists of interconnected recurrent neurons with continuous-time dynamics. The system emulates random-direction descent of the error as a multidimensional extension of stochastic approximation. To achieve unsupervised learning in recurrent dynamical systems we propose a synapse circuit that has a very simple structure and is suitable for VLSI implementation.
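    The random-direction descent mentioned above can be sketched in software: perturb all synaptic weights along a random direction, compare the tracking error of the positively and negatively perturbed networks, and step against the estimated slope. The Python sketch below is illustrative only; the network size, gain function, learning constants, and target signal are assumptions, not details taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        N, dt = 8, 0.01                  # neurons, integration step (assumed values)
        W = rng.normal(0.0, 0.5, (N, N)) # synaptic weights ("variable gain cells")

        def tracking_error(W, u, steps=400):
            """Integrate the continuous-time RNN x' = -x + W @ tanh(x) + u(t)
            and return the mean squared tracking error of neuron 0 against
            the periodic input."""
            x = np.zeros(N)
            err = 0.0
            for k in range(steps):
                drive = u(k * dt)
                x = x + dt * (-x + W @ np.tanh(x) + drive)
                err += (x[0] - drive[0]) ** 2
            return err / steps

        u = lambda t: np.sin(2 * np.pi * t) * np.ones(N)  # periodic input (assumed)

        lr, sigma = 0.05, 0.02
        for epoch in range(100):
            D = rng.choice([-1.0, 1.0], size=W.shape)  # random perturbation direction
            e_plus = tracking_error(W + sigma * D, u)
            e_minus = tracking_error(W - sigma * D, u)
            # SPSA-style estimate of the directional derivative; step downhill
            W -= lr * (e_plus - e_minus) / (2 * sigma) * D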

    Implementation of neural networks as CMOS integrated circuits


    Semiconductor Memory Applications in Radiation Environment, Hardware Security and Machine Learning System

    Semiconductor memory is a key component of computing systems. Beyond conventional memory and data-storage applications, this dissertation explores both mainstream and embedded non-volatile memory (eNVM) technologies for radiation environments, hardware security systems, and machine learning applications. In a radiation environment, e.g. aerospace, memory devices are exposed to various energetic particles. The strike of such a particle can generate electron-hole pairs (directly or indirectly) as it passes through the semiconductor device, resulting in a photo-induced current that may change the memory state. First, the trend of radiation effects in the mainstream memory technologies with technology-node scaling is reviewed. Then, single-event effects in oxide-based resistive random access memory (RRAM), one of the eNVM technologies, are investigated from the circuit level to the system level. The physical unclonable function (PUF) has been widely investigated as a promising hardware security primitive, which exploits the inherent randomness of a physical system (e.g. intrinsic semiconductor manufacturing variability). In this dissertation, two RRAM-based PUF implementations are proposed, for cryptographic key generation (weak PUF) and device authentication (strong PUF), respectively. The performance of the RRAM PUFs is evaluated with experiments and simulations. The impact of non-ideal circuit effects on PUF performance is also investigated, and optimization strategies are proposed to mitigate them; the resistance against modeling and machine-learning attacks is analyzed as well. Deep neural networks (DNNs) have shown remarkable improvements in various intelligent applications such as image classification, speech classification, and object localization and detection, and increasing effort has been devoted to developing hardware accelerators for them. This dissertation proposes two compute-in-memory (CIM) hardware accelerator designs, with SRAM and eNVM technologies, for two binary neural networks, the hybrid BNN (HBNN) and the XNOR-BNN, respectively, targeting resource-limited platforms such as edge devices. These designs feature high throughput, scalability, low latency, and high energy efficiency. Finally, the proposed SRAM-based design was taped out and validated in TSMC 65 nm technology. Overall, this dissertation paves the way for new applications of memory technologies in secure and energy-efficient artificial intelligence systems.
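    The XNOR-BNN arithmetic behind such compute-in-memory accelerators reduces the dot product of two ±1 vectors to a bitwise XNOR followed by a popcount: if m of the n bit positions match, the dot product is 2m − n. A minimal Python sketch of this identity follows; the bit-packing convention is an assumption for illustration, not a detail of the dissertation's design.

        import numpy as np

        def xnor_dot(a_bits: int, b_bits: int, n: int) -> int:
            """Dot product of two length-n vectors with entries in {-1, +1},
            each packed into an integer bitmask (bit set = +1, clear = -1).
            matches = popcount(~(a ^ b)) over the low n bits; dot = 2*matches - n."""
            mask = (1 << n) - 1
            matches = bin(~(a_bits ^ b_bits) & mask).count("1")
            return 2 * matches - n

        # Check against the plain ±1 dot product on random vectors.
        rng = np.random.default_rng(1)
        n = 16
        a = rng.choice([-1, 1], n)
        b = rng.choice([-1, 1], n)
        pack = lambda v: sum(1 << i for i, s in enumerate(v) if s == 1)
        assert xnor_dot(pack(a), pack(b), n) == int(a @ b)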

    A low-complexity current-mode WTA circuit based on CMOS Quasi-FG inverters

    In this paper, a low-complexity current-mode winner-take-all (WTA) circuit of O(n) complexity with logical outputs is presented. The proposed approach employs a quasi-floating-gate (Quasi-FG) inverter as the key element for current integration and for computing the winning cell. The design was implemented in a double-poly, three-metal-layer, 0.5 µm CMOS technology. The circuit exhibits a good accuracy-speed tradeoff when compared with other reported WTA architectures.
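    Behaviorally, a current-mode WTA with logical outputs asserts exactly one cell, the one carrying the largest input current. A small Python model of this input-output behavior (an illustration of the function computed, not of the paper's circuit):

        def winner_take_all(currents):
            """Behavioral model of a current-mode WTA with logical outputs:
            the cell with the largest input current outputs 1, all others 0."""
            winner = max(range(len(currents)), key=lambda i: currents[i])
            return [1 if i == winner else 0 for i in range(len(currents))]

        print(winner_take_all([2.1e-6, 5.4e-6, 3.3e-6]))  # -> [0, 1, 0]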

    ASD: multi-purpose adjustable classifier circuits

    Göknar, İzzet Cem (Dogus Author) -- Minaei, Shahram (Dogus Author) -- Yıldız, Merih (Dogus Author)
    This study examines adjustable classifier circuits and their application areas. A classifier integrated circuit was designed and fabricated in the AMS 0.35 μm CMOS process. Learning algorithms that determine the control parameters of this classifier circuit were developed for various applications. Classification was carried out on the Iris and Haberman data sets using the developed algorithms and the fabricated circuit, and the results were shown to be in agreement. Funded by TÜBİTAK.
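    The learning task described, searching for the control parameters of a threshold classifier from labelled data, can be illustrated with a perceptron-style update. The sketch below is a stand-in for the paper's algorithms, which the abstract does not detail; the synthetic data merely mirrors the shape of a two-class, four-feature problem like the Iris setup.

        import numpy as np

        def train_threshold_classifier(X, y, epochs=50, lr=0.1):
            """Learn weights w and threshold b so that sign(w @ x - b) matches
            the labels y in {-1, +1}. A perceptron-style stand-in for the
            parameter-search algorithms mentioned in the abstract."""
            rng = np.random.default_rng(0)
            w = rng.normal(0.0, 0.1, X.shape[1])
            b = 0.0
            for _ in range(epochs):
                for x, t in zip(X, y):
                    pred = 1 if w @ x - b > 0 else -1
                    if pred != t:          # update only on mistakes
                        w += lr * t * x
                        b -= lr * t
            return w, b

        # Synthetic two-class data (not the real Iris/Haberman sets)
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
        y = np.array([-1] * 20 + [1] * 20)
        w, b = train_threshold_classifier(X, y)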

    VLSI hardware neural accelerator using reduced precision arithmetic


    VLSI architectures for implementation of neural networks

    A large-scale collective system implementing a specific model for associative memory was described by Hopfield [1]. A circuit model for this operation is illustrated in Figure 1 and consists of three major components. A collection of active gain elements (called amplifiers or "neurons") with gain function $V = g(v)$ are connected by a passive interconnect matrix which provides unidirectional excitatory or inhibitory connections ("synapses") between the output of one neuron and the input of another. The strength of this interconnection is given by the conductance $G_{ij} = G_0 T_{ij}$. The requirements placed on the gain function $g(v)$ are not very severe [2], and are easily met by VLSI-realizable amplifiers. The third circuit element is the set of capacitances that determine the time evolution of the system; they are modelled as lumped capacitances. This formulation leads to the equations of motion shown in Figure 2, and to a Lyapunov energy function which governs the dynamics of the system and predicts the location of stable states (memories) in the case of a symmetric matrix $T$.
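    The equations of motion referenced above (Figure 2 is not reproduced here) take the standard Hopfield form; restated in LaTeX from the circuit description, with $C_i$ the lumped input capacitance of neuron $i$, $v_i$ its input voltage, $R_i$ its effective input resistance, and $I_i$ an external bias current (conventional symbols, assumed rather than copied from the figure):

        C_i \frac{dv_i}{dt} = \sum_j T_{ij} V_j - \frac{v_i}{R_i} + I_i,
        \qquad V_j = g(v_j)

    and the corresponding Lyapunov energy function for a symmetric $T_{ij}$ is

        E = -\frac{1}{2} \sum_{i,j} T_{ij} V_i V_j
            + \sum_i \frac{1}{R_i} \int_0^{V_i} g^{-1}(V)\, dV
            - \sum_i I_i V_i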

    Neural networks : analog VLSI implementation and learning algorithms
