22 research outputs found

    Computer arithmetic based on the Continuous Valued Number System


    A Prototype CVNS Distributed Neural Network

    Artificial neural networks are widely used in many applications such as signal processing, classification, and control. However, their practical implementation is challenged by the number of inputs, by storing the weights, and by realizing the activation function. In this work, Continuous Valued Number System (CVNS) distributed neural networks are proposed which provide the network with a self-scaling property. This property allows the network to cope spontaneously with different numbers of inputs: the proposed CVNS DNN can change the dynamic range of the activation function according to the number of inputs, providing proper functionality for the network. In addition, multi-valued CVNS DRAMs are proposed to store the weights as CVNS digits. These memories can store up to 16 levels, equal to 4 bits, on each storage cell, and they use error correction codes to detect and correct errors in the stored values. A synapse-neuron module is proposed to decrease the design cost. It contains both the synapse and the neuron together with the relevant components. In these modules, the activation function is realized through analog circuits, which are far more compact than digital look-up tables while remaining quite accurate. Furthermore, the redundancy between CVNS digits, together with the distributed structure of the neuron, makes the proposal robust against process variations and reduces the noise-to-signal ratio.
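
    The abstract does not spell out the digit-level CVNS arithmetic, but the self-scaling idea can be illustrated with a minimal sketch: the neuron divides its weighted sum by the fan-in before applying the activation function, so the activation's dynamic range tracks the number of inputs. All names and constants below are illustrative assumptions, not the thesis design.

```python
import numpy as np

def self_scaling_neuron(inputs, weights, gain=1.0):
    """Hypothetical illustration of a self-scaling neuron: the sigmoid's
    input range is normalised by the fan-in, so the output stays in a
    useful region regardless of how many inputs are connected."""
    n = len(inputs)                       # fan-in determines the scaling
    pre_activation = np.dot(weights, inputs)
    scaled = gain * pre_activation / n    # shrink the range as fan-in grows
    return 1.0 / (1.0 + np.exp(-scaled))  # standard logistic activation

# With the same per-input statistics, different fan-ins give comparable
# outputs, which is the self-scaling behaviour the abstract describes.
for n in (4, 16, 64):
    x = np.ones(n) * 0.5
    w = np.ones(n) * 0.8
    print(n, self_scaling_neuron(x, w))
```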

    Mixed-Signal Neural Network Implementation with Programmable Neuron

    This thesis introduces the implementation of mixed-signal building blocks of an artificial neural network, namely the neuron and the synaptic multiplier. It also investigates the nonlinear dynamic behavior of a single artificial neuron and presents a Distributed Arithmetic (DA)-based Finite Impulse Response (FIR) filter. All the introduced structures are designed and custom laid out.
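
    As a purely functional illustration of the distributed-arithmetic idea (the thesis itself targets custom-laid-out hardware), the sketch below replaces per-tap multiplications with a precomputed partial-sum look-up table and a bit-serial shift-accumulate loop; the coefficients and word length are arbitrary assumptions.

```python
import numpy as np

def build_da_lut(h):
    """Precompute the 2**K partial-sum table used by distributed arithmetic:
    entry `a` holds the sum of coefficients whose corresponding bit of `a` is 1."""
    K = len(h)
    lut = np.zeros(2 ** K)
    for a in range(2 ** K):
        lut[a] = sum(h[k] for k in range(K) if (a >> k) & 1)
    return lut

def da_fir(x, h, bits=8):
    """Bit-serial DA evaluation of y[n] = sum_k h[k] * x[n-k] for unsigned
    `bits`-wide integer samples (a software simplification of the hardware)."""
    K, lut = len(h), build_da_lut(h)
    x = np.asarray(x, dtype=int)
    y = np.zeros(len(x))
    for n in range(len(x)):
        taps = [x[n - k] if n - k >= 0 else 0 for k in range(K)]
        acc = 0.0
        for b in range(bits):                  # one LUT access per input bit
            addr = sum(((taps[k] >> b) & 1) << k for k in range(K))
            acc += lut[addr] * (1 << b)        # shift-accumulate
        y[n] = acc
    return y

h = [0.25, 0.5, 0.25]                          # example low-pass taps (assumed)
x = [0, 64, 128, 255, 128, 64, 0]
print(np.allclose(da_fir(x, h), np.convolve(x, h)[:len(x)]))  # matches direct FIR
```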

    A Mixed-Signal Feed-Forward Neural Network Architecture Using A High-Resolution Multiplying D/A Conversion Method

    Artificial Neural Networks (ANNs) are parallel processors capable of learning from a set of sample data using a specific learning rule. Such systems are commonly used in applications where the human brain may surpass conventional computers, such as image processing, speech/character recognition, intelligent control, and robotics, to name a few. In this thesis, a mixed-signal neural network architecture is proposed that employs a high-resolution Multiplying Digital-to-Analog Converter (MDAC) designed using Delta-Sigma Modulation (DSM). To reduce chip area, multiplexing is used in addition to analog implementation of the arithmetic operations. This work employs a new method for filtering the high bit-rate signals using the neurons' nonlinear transfer functions already present in the network. As a result, a configuration of a few MOS transistors replaces the large resistors required to implement the low-pass filter in the network. This configuration noticeably decreases the chip area and also makes multiplexing feasible for hardware implementation.
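
    A rough behavioural sketch of the multiplying-DAC idea follows: a digital weight is encoded as a first-order delta-sigma bitstream, the bitstream gates the analog operand, and a simple average stands in for the low-pass filtering that the actual design obtains from the neurons' transfer function and a few MOS transistors. All parameters are illustrative assumptions.

```python
import numpy as np

def delta_sigma_bitstream(u, n_steps=4096):
    """First-order delta-sigma modulator: encodes a value u in [-1, 1] as a
    +/-1 bitstream whose time average approximates u (the high-resolution
    'digital' side of a multiplying DAC)."""
    s, bits = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        y = 1.0 if s >= 0 else -1.0   # 1-bit quantiser
        s += u - y                    # integrate the quantisation error
        bits[i] = y
    return bits

def mdac_multiply(x, w, n_steps=4096):
    """Multiply an 'analog' sample x by a weight w by gating x with the
    delta-sigma bitstream of w and averaging (the average stands in for
    the low-pass filtering performed inside the network)."""
    return np.mean(x * delta_sigma_bitstream(w, n_steps))

print(mdac_multiply(0.8, 0.35))   # close to 0.8 * 0.35 = 0.28
```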

    Mixed-Signal VLSI Implementation of CVNS Artificial Neural Networks

    In this work, a mixed-signal implementation of Continuous Valued Number System (CVNS) neural networks is proposed. The proposed network resolves the limited signal-processing precision present in mixed-signal neural networks. This is realized by the CVNS addition, CVNS multiplication, and CVNS sigmoid function evaluation algorithms proposed in this dissertation, which provide accurate results in a low-resolution environment. In addition, an area-efficient, low-sensitivity CVNS Madaline is proposed. The proposed Madaline is more robust to input and weight errors than previously developed structures, and its area consumption is lower. Furthermore, a new approximation scheme for the hyperbolic tangent activation function is proposed. Using the proposed approximation scheme results in efficient implementation of digital ASIC neural networks in terms of area, delay, and power consumption.
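
    The dissertation's specific approximation scheme is not described in the abstract; the sketch below only conveys the general flavour of such schemes, using a small piecewise-linear table of exact tanh samples with odd symmetry and saturation beyond the last breakpoint. Breakpoints and table size are assumptions.

```python
import numpy as np

# Breakpoints for a small piecewise-linear table; values are exact tanh samples.
_XP = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
_FP = np.tanh(_XP)

def pwl_tanh(x):
    """Piecewise-linear tanh stand-in built from a 6-entry table; odd symmetry
    handles negative inputs, and the output saturates at tanh(3) beyond the
    last breakpoint. (Illustrative only; not the dissertation's scheme.)"""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.interp(np.abs(x), _XP, _FP)

x = np.linspace(-4, 4, 801)
print(float(np.max(np.abs(pwl_tanh(x) - np.tanh(x)))))  # worst-case error of this sketch
```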

    Microelectronic CMOS implementation of a machine learning technique for sensor calibration

    An integrated machine-learning-based adaptive circuit for sensor calibration, implemented in standard 0.18 μm CMOS technology with a 1.8 V power supply, is presented in this paper. In addition to linearizing the device response, the proposed system is also capable of correcting offset and gain errors. The building blocks comprising the adaptive system are designed and experimentally characterized to generate numerical high-level models, which are used to verify the proper performance of each analog block within a defined multilayer perceptron architecture. The network weights, obtained from the learning phase, are stored in a microcontroller EEPROM memory and then loaded into each of the registers of the proposed integrated prototype. To verify the performance of the proposed system, the nonlinear characteristic of a thermistor is compensated as an application example, achieving a relative error (e_r) below 3% within an input span of 130 °C, almost 6 times less than the uncorrected response. The power consumption of the whole system is 1.4 mW and its active area is 0.86 mm². The digital programmability of the network weights provides flexibility when a sensor change is required.
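
    A software sketch of the calibration flow is shown below: a small multilayer perceptron learns the inverse of a nonlinear thermistor response, and its trained weights would then play the role of the values stored in EEPROM and loaded into the prototype's registers. The thermistor constants, network size, and training loop are all assumptions; the paper's actual learning phase and hardware are not reproduced.

```python
import numpy as np

# Illustrative calibration sketch: fit the inverse of a (hypothetical) NTC
# thermistor divider with a tiny 1-8-1 MLP trained by batch gradient descent.
rng = np.random.default_rng(0)
T = np.linspace(-20.0, 110.0, 400)                        # 130 degC input span
beta, R0, T0 = 3950.0, 10e3, 298.15                       # assumed NTC parameters
R = R0 * np.exp(beta * (1.0 / (T + 273.15) - 1.0 / T0))   # thermistor resistance
v = R / (R + 10e3)                                        # divider output, nonlinear in T

X = ((v - v.mean()) / v.std())[None, :]                   # normalised network input
Y = ((T - T.min()) / (T.max() - T.min()))[None, :]        # target scaled to [0, 1]

W1, b1 = rng.normal(0, 1, (8, 1)), np.zeros((8, 1))       # 8 tanh hidden units
W2, b2 = rng.normal(0, 1, (1, 8)), np.zeros((1, 1))       # linear output unit
lr, n = 0.05, X.shape[1]
for _ in range(20000):                                    # plain batch gradient descent
    H = np.tanh(W1 @ X + b1)
    E = (W2 @ H + b2) - Y                                 # prediction error
    gW2, gb2 = E @ H.T / n, E.mean(axis=1, keepdims=True)
    dH = (W2.T @ E) * (1 - H ** 2)                        # backprop through tanh
    gW1, gb1 = dH @ X.T / n, dH.mean(axis=1, keepdims=True)
    W1, b1 = W1 - lr * gW1, b1 - lr * gb1
    W2, b2 = W2 - lr * gW2, b2 - lr * gb2

T_hat = (W2 @ np.tanh(W1 @ X + b1) + b2).ravel() * (T.max() - T.min()) + T.min()
print("max calibration error (degC):", np.max(np.abs(T_hat - T)))
```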

    Deep Liquid State Machines with Neural Plasticity and On-Device Learning

    The Liquid State Machine (LSM) is a recurrent spiking neural network designed for efficient processing of spatio-temporal streams of information. LSMs have several inbuilt features such as robustness, fast training and inference, generalizability, continual learning (no catastrophic forgetting), and energy efficiency. These features make LSMs ideal networks for deploying intelligence on-device. In general, single LSMs are unable to solve complex real-world tasks. Recent literature has shown the emergence of hierarchical architectures to support temporal information processing over different time scales. However, these approaches typically do not investigate the optimum topology for communication between layers in the hierarchical network, or they assume prior knowledge about the target problem and are not generalizable. In this thesis, a deep Liquid State Machine (deep-LSM) network architecture is proposed. The deep-LSM uses staggered reservoirs to process temporal information on multiple timescales. A key feature of this network is that neural plasticity and attention are embedded in the topology to bolster its performance on complex spatio-temporal tasks. An advantage of the deep-LSM is that it exploits the random projection native to the LSM as well as local plasticity mechanisms to optimize the data transfer between sequential layers. Both random projections and local plasticity mechanisms are ideal for on-device learning due to their low computational complexity and the absence of backpropagated error. The deep-LSM is deployed on a custom learning architecture with memristors to study the feasibility of on-device learning. The performance of the deep-LSM is demonstrated on speech recognition and seizure detection applications.
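
    A toy software sketch of the basic LSM idea follows: a fixed, randomly connected reservoir of leaky integrate-and-fire neurons turns an input stream into a high-dimensional state, and only a simple readout is trained. Neuron counts, time constants, and the synthetic two-class task are assumptions; the staggered reservoirs, plasticity, attention, and memristor hardware of the thesis are not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt, tau, v_th = 100, 300, 1e-3, 20e-3, 1.0      # assumed reservoir parameters

W_in = rng.normal(0, 0.8, N)                              # fixed random input weights
W_rec = rng.normal(0, 0.25, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse recurrence

def run_liquid(u):
    """Drive the LIF reservoir with the input stream u and return a
    low-pass spike trace as the 'liquid state' used by the readout."""
    v, spikes, trace = np.zeros(N), np.zeros(N), np.zeros(N)
    for t in range(steps):
        i_in = W_in * u[t] + W_rec @ spikes               # external + recurrent drive
        v += dt / tau * (-v + i_in)                       # leaky integration
        spikes = (v >= v_th).astype(float)
        v[spikes > 0] = 0.0                               # reset fired neurons
        trace = 0.95 * trace + spikes                     # exponential spike trace
    return trace

# Only the linear readout is trained; the liquid itself stays fixed.
X, y = [], []
for label, freq in [(0, 5.0), (1, 11.0)]:                 # two synthetic input classes
    for _ in range(20):
        t_axis = np.arange(steps) * dt
        u = np.sin(2 * np.pi * freq * t_axis) + rng.normal(0, 0.2, steps)
        X.append(run_liquid(u))
        y.append(label)
X, y = np.array(X), np.array(y)
A = np.c_[X, np.ones(len(X))]                             # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)                 # least-squares readout
print("readout training accuracy:", ((A @ w > 0.5) == y).mean())
```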

    A Study of Techniques and Mechanisms of Vagus Nerve Stimulation for Treatment of Inflammation

    Vagus nerve stimulation (VNS) has been at the forefront of inflammatory disorder research for the better part of the last three decades and has yielded many promising results. There remains, however, much debate about the actual biological mechanisms of such treatments, as well as questions about inconsistencies in the methods used in many research efforts. In this work, I identify shortcomings in past VNS methods and present new developments and findings that can move the research community toward more selective and relevant VNS research and treatments. In Aim 1, I present the most recent advancements in the capabilities of our fully implantable Bionode stimulation device platform for use in VNS studies, including stimulation circuitry, device packaging, and stimulation cuff design. In Aim 2, I characterize the inflammatory cytokine response of rats to intraperitoneally injected endotoxin using new data analysis methods and demonstrate the modulatory effects of VNS applied by the Bionode stimulator to subdiaphragmatic branches of the left vagus nerve in an acute study. In Aim 3, using fully implanted Bionode devices, I expose a previously unidentified effect of chronically cuffing the left cervical vagus nerve: suppression of efferent Fluorogold transport and unintended attenuation of the physiological effects of VNS. Finally, in accordance with the findings from Aims 1, 2, and 3, I present results from new and promising techniques we have explored for future use of VNS in inflammation studies.

    Radio Continuum Surveys with Square Kilometre Array Pathfinders

    In the lead-up to the Square Kilometre Array (SKA) project, several next-generation radio telescopes and upgrades are already being built around the world. These include APERTIF (The Netherlands), ASKAP (Australia), e-MERLIN (UK), VLA (USA), e-EVN (based in Europe), LOFAR (The Netherlands), MeerKAT (South Africa), and the Murchison Widefield Array. Each of these new instruments has different strengths, and coordination of surveys between them can help maximise the science from each. A radio continuum survey is being planned on each of them with the primary science objective of understanding the formation and evolution of galaxies over cosmic time, and the cosmological parameters and large-scale structures which drive it. In pursuit of this objective, the different teams are developing a variety of new techniques and refining existing ones. To achieve these exciting scientific goals, many technical challenges must be addressed by the survey instruments. Given the limited resources of the global radio-astronomical community, it is essential that we pool our skills and knowledge; we do not have sufficient resources to enjoy the luxury of re-inventing wheels. We face significant challenges in calibration, imaging, source extraction and measurement, classification and cross-identification, redshift determination, stacking, and data-intensive research. As these instruments extend the observational parameter space, we will face further unexpected challenges in calibration, imaging, and interpretation. If we are to realise the full scientific potential of these expensive instruments, it is essential that we devote enough resources and careful study to understanding the instrumental effects and how they will affect the data. We have established an SKA Radio Continuum Survey working group, whose prime role is to maximise science from these instruments by ensuring we share resources and expertise across the projects. Here we describe these projects, their science goals, and the technical challenges which are being addressed to maximise the science return.