
    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in terms of readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, compared to a single perceptron using analog weights, this readout architecture can attain, even using the same number of binary-valued synapses, up to 3.3 times less error for a two-class spike train classification problem and 2.4 times less error for an input rate approximation task. Even with 60 times more synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations. Comment: 14 pages, 19 figures, Journa
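
    To make the rewiring idea concrete, here is a minimal sketch (not the authors' implementation): binary synapses grouped onto dendritic branches with a lumped branch nonlinearity, trained by randomly swapping one connection at a time and keeping swaps that reduce the error. The branch count, fan-in, and choice of a squaring nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def readout(conn, x):
    # conn: (branches, fan_in) indices into the liquid state vector x.
    # Each branch sums its binary-weighted inputs and applies a lumped
    # nonlinearity (squaring); the soma sums the branch outputs.
    return np.sum(x[conn].sum(axis=1) ** 2)

def train_nrw(X, y, branches=10, fan_in=5, steps=2000):
    # X: (samples, liquid_size) liquid states; y: target outputs.
    n_liquid = X.shape[1]
    conn = rng.integers(0, n_liquid, size=(branches, fan_in))
    err = np.mean([(readout(conn, x) - t) ** 2 for x, t in zip(X, y)])
    for _ in range(steps):
        trial = conn.copy()
        b, s = rng.integers(branches), rng.integers(fan_in)
        trial[b, s] = rng.integers(n_liquid)     # rewire one synapse
        trial_err = np.mean([(readout(trial, x) - t) ** 2
                             for x, t in zip(X, y)])
        if trial_err < err:                      # keep swaps that help
            conn, err = trial, trial_err
    return conn, err
```

    Because the synapses are binary, the only learned quantity is the connection matrix itself, which is why AER-style connectivity tables stored in memory map onto this scheme so naturally.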

    Hardware Learning in Analogue VLSI Neural Networks


    Power System Transients: Impacts of Non-Ideal Sensors on Measurement-Based Applications

    The power system comprises thousands of lines, generation sources, transformers, and other equipment responsible for servicing millions of customers. Such a complex apparatus requires constant monitoring and protection schemes capable of keeping the system operational, reliable, and resilient. To achieve these goals, measurement plays a critical role in the continued functionality of the power system. However, measurement devices are never completely reliable and are susceptible to inherent irregularities, imparting potentially misleading distortions on measurements containing high-frequency components. This dissertation analyzes some of these effects, as well as the ways they may impact certain applications in the grid that utilize these kinds of measurements. The dissertation first presents background on existing measurement technologies currently in use in the power grid, with extra emphasis placed on point-on-wave (PoW) sensors, those designed to capture oscillographic records of voltage and current signals. Next, a waveform “playback” system developed at Oak Ridge National Laboratory’s Distributed Energy Communications & Control (DECC) laboratory was used to compare various line-post-monitor PoW sensors subjected to different high-frequency current disturbances. Each of the three sensors exhibited unique quirks in these spectral regions, in terms of both harmonic magnitude and phase angle. A goodness-of-fit metric for comparing an ideal reference sensor with the test sensors was adopted from the literature and showed how severely two of the test sensors underperformed compared to the third. The subsequent chapter analyzes these behaviors through a statistical lens, using kernel density estimation to fit probability density functions (PDFs) to the error distributions at specific harmonic frequencies resulting from sensor frequency response distortions. The remaining two chapters of the dissertation are concerned with the resultant effects on applications that require high-frequency transient data. First, a detection algorithm is presented, and its performance when subjected to the statistical errors inherent in these sensors is quantified. The dissertation culminates with a study of an artificial intelligence (AI) technique for estimating the location of capacitor switching transients, as well as learning prediction intervals that indicate the level of uncertainty present in the data caused by sensor frequency response irregularities.
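
    As a rough illustration of the statistical chapter, the sketch below fits a smooth PDF to a set of per-harmonic sensor errors using kernel density estimation; the sampling distribution and the harmonic in question are made up for the example, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical magnitude errors (percent) at one harmonic frequency;
# a real study would use measured reference-vs-test sensor residuals.
errors = rng.normal(loc=-2.0, scale=0.8, size=500)

pdf = gaussian_kde(errors)            # bandwidth chosen by Scott's rule
grid = np.linspace(errors.min() - 1, errors.max() + 1, 200)
density = pdf(grid)                   # estimated error PDF on a grid
print(f"mode near {grid[np.argmax(density)]:.2f}% error")
```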

    Scalable Hardware Efficient Deep Spatio-Temporal Inference Networks

    Deep machine learning (DML) is a promising field of research that has enjoyed much success in recent years. Two of the predominant deep learning architectures studied in the literature are Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs). Both have been successfully applied to many standard benchmarks, with a primary focus on machine vision and speech processing domains. Many real-world applications involve time-varying signals and consequently necessitate models that efficiently represent both temporal and spatial attributes. However, neither DBNs nor CNNs are designed to naturally capture temporal dependencies in observed data, often resulting in the inadequate transformation of spatio-temporal signals into wide spatial structures. It is argued that deep machine learning without proper temporal representation mechanisms is unable to extract meaningful information from many time-varying natural signals. Another clear emerging need is for growing deep learning architectures with the size of the problem at hand, suggesting that such architectures should map well to custom hardware platforms, which offer much better performance than that achievable using CPUs or even GPUs. Analog computation is a unique potential solution to the scalability challenge, offering the benefits of low power consumption and smaller physical size when compared to digital implementations. However, these benefits come at the cost of inaccurate computations and noise. This work presents an enhanced formulation of the Deep Spatio-Temporal Inference Network (DeSTIN), an architecture inherently designed to capture both spatial and temporal dependencies in the data provided. The regular structure of DeSTIN, its computational requirements, and its local connectivity render it hardware-efficient and highly scalable. Implementation of DeSTIN using analog computation is studied in detail, and the architecture's robustness to various distortions in its signals is demonstrated. To the best of our knowledge, this is the first time custom analog hardware has been developed for deep machine learning. Key enhancements to previous formulations of DeSTIN are discussed in detail, and results on standard benchmarks are presented. This work helps pave the way for advancing deep learning to address some of the long-standing challenges in machine learning.
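
    The temporal mechanism in a DeSTIN node is a recursive belief update that mixes a prediction through a state transition model with the likelihood of the current observation. A minimal sketch of that style of update follows, with random stand-ins for the learned observation and transition models; the exact update rule used in this work may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states = 4   # size of each node's latent state space (illustrative)

# Row-stochastic transition model P(s'|s); a random stand-in here.
P_trans = rng.dirichlet(np.ones(n_states), size=n_states)
belief = np.full(n_states, 1.0 / n_states)   # start from a uniform belief

def update_belief(belief, obs_likelihood, P_trans):
    # Predict forward through the transition model, weight by the
    # observation likelihood, and renormalize to keep a distribution.
    predicted = P_trans.T @ belief
    posterior = obs_likelihood * predicted
    return posterior / posterior.sum()

for _ in range(5):                         # a short stream of observations
    obs_likelihood = rng.random(n_states)  # stand-in for P(o|s)
    belief = update_belief(belief, obs_likelihood, P_trans)
print(belief)
```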

    Bendit_I/O: A System for Extending Mediated and Networked Performance Techniques to Circuit-Bent Devices

    Circuit bending—the act of modifying a consumer device's internal circuitry in search of new, previously unintended responses—provides artists with a chance to subvert expectations for how a certain piece of hardware should be utilized, asking them to view everyday objects as complex electronic instruments. Along with the ability to create avant-garde instruments from unique and nostalgic sound sources, the practice of circuit bending serves as a methodology for exploring the histories of discarded objects through activism, democratization, and creative resurrection. While a rich history of circuit bending continues to inspire artists today, the recent advent of smart musical instruments and the growing number of hybrid tools available for creating connective musical experiences through networks ask us to reconsider the ways in which repurposed devices can continue to play a role in modern sonic art. Bendit_I/O serves as a synthesis of the technologies and aesthetics of the circuit bending and Networked Musical Performance (NMP) practices. The framework extends techniques native to telematic and network art to hacked hardware so that artists can design collaborative and mediated experiences that incorporate old devices into new realities. Consisting of user-friendly hardware and software components, Bendit_I/O aims to be an entry point for novice artists into both of the creative realms it brings together. This document presents details on the components of the Bendit_I/O framework along with an analysis of their use in three new compositions. Additional research serves to place the framework in historical context through literature reviews of previous work undertaken in the circuit bending and networked musical performance practices. Additionally, a case is made for performing hacked consumer hardware across a wireless network, emphasizing how extensions to current circuit bending and NMP practices provide the ability to probe our relationships with hardware through collaborative, mediated, and multimodal methods.

    Angles and devices for quantum approximate optimization

    A potential application of emerging Noisy Intermediate-Scale Quantum (NISQ) devices is that of approximately solving combinatorial optimization problems. This thesis investigates a gate-based algorithm for this purpose, the Quantum Approximate Optimization Algorithm (QAOA), in two major themes. First, we examine how the QAOA solves the problems it is designed to solve. We take a statistical view of the algorithm applied to ensembles of problems, first considering a highly symmetric version of the algorithm using Grover drivers. In this highly symmetric context, we find a simple dependence of the QAOA state's expected value on how the values of the cost function are distributed. Furthering this theme, we demonstrate that QAOA performance generally depends on problem statistics with respect to a metric induced by a chosen driver Hamiltonian. We obtain a method for evaluating QAOA performance on worst-case problems, those of random costs, for differing driver choices. Second, we investigate a QAOA context with device control occurring only via single-qubit gates, rather than using individually programmable one- and two-qubit gates. In this reduced-control-overhead scheme—the digital-analog scheme—the complexity of devices running QAOA circuits is decreased at the cost of errors, which are shown to be non-harmful in certain regimes. We then explore hypothetical device designs one could use for this purpose.
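
    For readers unfamiliar with the algorithm, the following is a self-contained depth-1 QAOA sketch on a toy MaxCut instance, simulated directly on the statevector; the triangle graph, the standard transverse-field mixer, and the brute-force angle grid are illustrative choices, not taken from the thesis.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # toy MaxCut instance: a triangle
n = 3
dim = 2 ** n
# Number of cut edges for each computational basis state.
cost = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                 for z in range(dim)], dtype=float)

def qaoa_expectation(gamma, beta):
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n
    state = state * np.exp(-1j * gamma * cost)             # cost layer
    for q in range(n):                        # mixer: e^{-i beta X_q}
        flipped = state[np.arange(dim) ^ (1 << q)]
        state = np.cos(beta) * state - 1j * np.sin(beta) * flipped
    return float(np.real(np.conj(state) @ (cost * state)))

# Brute-force the two angles on a coarse grid (feasible for p = 1).
grid = np.linspace(0, np.pi, 40)
gamma, beta = max(((g, b) for g in grid for b in grid),
                  key=lambda gb: qaoa_expectation(*gb))
print(f"best <C> = {qaoa_expectation(gamma, beta):.3f} (max cut = 2)")
```

    Because the cost operator is diagonal in the computational basis, the cost layer is a pure phase rotation and the expectation value reduces to a cost-weighted sum over basis-state probabilities.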

    Automatic Pain Assessment by Learning from Multiple Biopotentials

    Accurate pain assessment plays an important role in proper pain management, especially among hospitalized people experiencing acute pain. Pain is subjective in nature: it is not only a sensory feeling but can also involve affective factors. Therefore, self-report pain scales are the main assessment tools as long as patients are able to self-report. However, it remains a challenge to assess pain in patients who cannot self-report. In clinical practice, physiological parameters like heart rate and pain behaviors including facial expressions are observed as empirical references to infer pain objectively. The main aim of this study is to automate such a process by leveraging machine learning methods and biosignal processing.
To achieve this goal, biopotentials reflecting autonomic nervous system activities, including the electrocardiogram and galvanic skin response, and facial expressions measured with facial electromyograms were recorded from healthy volunteers undergoing an experimental pain stimulus. IoT-enabled biopotential acquisition systems were developed to build the database, aiming at providing compact and wearable solutions. Using the database, a biosignal processing flow was developed for continuous pain estimation. Signal features were extracted with customized time window lengths and updated every second. The extracted features were visualized and fed into multiple classifiers trained to estimate the presence of pain and pain intensity separately. Among the tested classifiers, the best pain presence estimation achieved 90% sensitivity (with 84% specificity), and the best pain intensity estimation achieved 62.5% accuracy. The results show the validity of the proposed processing flow, especially for pain presence estimation at the window level. This study adds one more piece of evidence on the feasibility of developing an automatic pain assessment tool from biopotentials, providing the confidence to move forward to real pain cases. In addition to the method development, the similarities and differences between automatic pain assessment studies were compared and summarized. It was found that, in addition to the diversity of signals, the estimation goals also differed as a result of different study designs, which made cross-dataset comparison challenging. We also discuss which parts of the classical processing flow limit or boost prediction performance and whether optimization can bring a breakthrough from the system's perspective.
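
    A minimal sketch of such a windowed feature-extraction-plus-classification flow is shown below; the sampling rate, the five-second window, the statistical features, and the random-forest classifier are assumptions for illustration, not the thesis's exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

fs = 250                      # assumed sampling rate, Hz
win = 5 * fs                  # assumed 5 s analysis window
step = fs                     # features updated once per second

def window_features(sig):
    # Simple per-window statistics as stand-ins for the thesis features.
    return [sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()]

def extract(signals):
    # signals: (channels, samples); slide the window, one row per second.
    rows = []
    for start in range(0, signals.shape[1] - win + 1, step):
        seg = signals[:, start:start + win]
        rows.append(np.concatenate([window_features(ch) for ch in seg]))
    return np.array(rows)

rng = np.random.default_rng(3)
signals = rng.standard_normal((3, 60 * fs))  # ECG, GSR, facial EMG stand-ins
X = extract(signals)
y = rng.integers(0, 2, size=len(X))          # dummy pain / no-pain labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```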

    Energy-Efficient Circuit Designs for Miniaturized Internet of Things and Wireless Neural Recording

    Internet of Things (IoT) devices have become omnipresent across various domains, including healthcare, smart buildings, agriculture, and environmental and industrial monitoring. Today, IoT devices are being miniaturized while at the same time becoming more intelligent alongside the explosive growth of machine learning. Not only do they sense, collect, and communicate data, but they also edge-compute and extract useful information within a small form factor. A main challenge for such miniaturized and intelligent IoT devices is to operate continuously over a long lifetime on a low battery capacity. Energy efficiency of circuits and systems is key to addressing this challenge. This dissertation presents two different energy-efficient circuit designs: a 224 pW, 260 ppm/°C gate-leakage-based timer for wireless sensor nodes (WSNs) for the IoT, and an energy-efficient all-analog machine learning accelerator consuming 1.2 µJ/inference on the CIFAR-10 and SVHN datasets. The wireless neural interface is another area that demands miniaturized and energy-efficient circuits and systems for safe long-term monitoring of brain activity. Historically, implantable systems have used wires for data communication and power, increasing the risk of tissue damage. Therefore, it has been a long-standing goal to distribute sub-mm-scale true floating and wireless implants throughout the brain and to record single-neuron-level activities. This dissertation presents a 0.19×0.17 mm², 0.74 µW wireless neural recording IC with near-infrared (NIR) power and data telemetry, and a 0.19×0.28 mm², 0.57 µW light-tolerant wireless neural recording IC.
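
    To see why picowatt-level standby power matters at this scale, here is a back-of-the-envelope calculation; the 1 mAh, 3 V battery is an assumption for illustration, not a figure from the dissertation.

```python
# Assumed battery: 1 mAh at 3 V, i.e. roughly 10.8 J of stored energy.
capacity_mAh = 1.0
voltage_V = 3.0
energy_J = capacity_mAh * 1e-3 * 3600 * voltage_V

timer_W = 224e-12             # always-on timer power from the abstract
lifetime_s = energy_J / timer_W
print(f"timer-only lifetime: {lifetime_s / 3.156e7:.0f} years")
```

    Even a tiny battery could run the timer alone for centuries, so in practice the sensing, computation, and radio dominate the energy budget rather than the always-on timekeeping.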