
    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad hoc source of well-behaved, explicit noise, either on the input or on the output side of single-neuron dynamics, most often an independent Poisson process in either case. These assumptions are, however, problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated, while at the same time neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, need no noise at all to perform sampling-based Bayesian inference. We analytically study the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates, be they biological or artificial, required to perform probabilistic computing.
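    The sampling-based coding scheme this abstract refers to can be illustrated by an abstract Gibbs sampler over binary "neurons" whose stationary distribution is a Boltzmann distribution shaped by the synaptic weights. This is a generic textbook sketch, not the thesis's deterministic spiking model; all names and parameter values here are illustrative.

```python
import numpy as np

def gibbs_sample(W, b, n_steps=10000, rng=None):
    """Sample binary states z from p(z) ~ exp(b.z + z.W.z / 2) by Gibbs
    updates. Abstract model of sampling-based inference; real spiking
    dynamics (as studied in the thesis) differ."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    z = rng.integers(0, 2, n)
    samples = np.empty((n_steps, n))
    for t in range(n_steps):
        for k in range(n):
            # "membrane potential" analogue: bias plus recurrent input
            u = b[k] + W[k] @ z - W[k, k] * z[k]
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

# Two units coupled by an excitatory weight: their sampled co-activation
# should exceed the product of their marginal activation probabilities.
W = np.array([[0.0, 1.5], [1.5, 0.0]])
b = np.array([-0.5, -0.5])
s = gibbs_sample(W, b)
p1, p2, p12 = s[:, 0].mean(), s[:, 1].mean(), (s[:, 0] * s[:, 1]).mean()
```

    The point of the illustration is that the correlation structure of the samples is entirely determined by the weights, which is what makes it controllable through synaptic plasticity.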

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed in mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, owing to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g.
I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to reach industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems, in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems, respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption. More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware datapaths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction.
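    The precision-reduction knob mentioned above can be sketched in a few lines: narrowing the bit width of operands degrades quality (error grows) while, on real hardware, shrinking the datapath saves energy. The cost model below is a placeholder invented for illustration, not a figure from the thesis.

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of values in [-1, 1] to the given bit width."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

# Toy energy-quality sweep (illustrative only): wider datapaths reduce the
# mean absolute error but cost more under a hypothetical quadratic model.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 10000)
results = {}
for bits in (4, 8, 12):
    err = np.mean(np.abs(quantize(x, bits) - x))
    energy = bits ** 2  # placeholder cost model, not a measured value
    results[bits] = (err, energy)
```

    A runtime-reconfigurable system in the sense of this thesis would pick `bits` dynamically, per task or per task phase, instead of fixing it at design time.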
These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are rethought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchange between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and of being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display.
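    The general idea behind approximate differential bus encodings can be sketched as follows: transmit differences between consecutive sensor samples and suppress differences below a tunable error bound, so that slowly varying data produces long runs of zeros (few bus transitions). This is a minimal sketch of the principle under stated assumptions; it is not the exact Approximate Differential Encoding or Serial-T0 algorithm from the thesis.

```python
def approx_diff_encode(samples, max_err):
    """Encode a stream as clamped differences: a difference whose magnitude
    is <= max_err is sent as 0, saving transitions at the cost of a bounded
    approximation. Illustrative, not the thesis's exact scheme."""
    out, last = [], 0
    for s in samples:
        d = s - last
        if abs(d) <= max_err:
            d = 0          # suppress the small change: nothing new on the bus
        out.append(d)
        last += d          # reconstruction state as seen by the decoder
    return out

def decode(diffs):
    vals, acc = [], 0
    for d in diffs:
        acc += d
        vals.append(acc)
    return vals

sig = [10, 11, 11, 12, 20, 21, 21]
enc = approx_diff_encode(sig, max_err=1)   # -> [10, 0, 0, 2, 8, 0, 0]
rec = decode(enc)                          # reconstruction error bounded by 1
```

    Setting `max_err` at runtime is exactly the kind of error-knob reconfiguration the thesis argues EQ scalable interconnects should expose.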
For each of these three topics, results show that the aforementioned goal of building EQ scalable systems that are compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy-versus-quality tradeoff.

    Doctor of Philosophy

    This dissertation describes the use of cortical surface potentials, recorded with dense grids of microelectrodes, for brain-computer interfaces (BCIs). The work presented herein is an in-depth treatment of a broad and interdisciplinary topic, covering issues from electronics to electrodes, signals, and applications. Within the scope of this dissertation are several significant contributions. First, this work was the first to demonstrate that speech and arm movements could be decoded from surface local field potentials (LFPs) recorded in human subjects. Using surface LFPs recorded over face-motor cortex and Wernicke's area, 150 trials comprising vocalized articulations of ten different words were classified on a trial-by-trial basis with 86% accuracy. Surface LFPs recorded over the hand and arm area of motor cortex were used to decode continuous hand movements, with a correlation of 0.54 between the actual and predicted position over 70 seconds of movement. Second, this work is the first to make a detailed comparison of cortical field potentials recorded intracortically with microelectrodes and at the cortical surface with both micro- and macroelectrodes. Whereas coherence in macro-electrocorticography (ECoG) decayed to half its maximum at 5.1 mm separation in high frequencies, spatial constants of micro-ECoG signals were 530-700 µm, much closer to the 110-160 µm calculated for intracortical field potentials than to the macro-ECoG. These findings confirm that cortical surface potentials contain millimeter-scale dynamics. Moreover, these fine spatiotemporal features were important for the performance of speech and arm movement decoding.
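    The two kinds of figures quoted above (a half-maximum separation for macro-ECoG, a spatial constant for micro-ECoG) can be put on a common footing if one assumes a simple exponential decay of coherence with electrode separation, c(d) = exp(-d / λ), where the half-maximum distance is λ·ln 2. That exponential model is an assumption made here for illustration, not necessarily the fit used in the dissertation.

```python
import math

def length_constant_from_half_distance(d_half_mm):
    """Under an assumed exponential decay c(d) = exp(-d / lam), coherence
    halves at d_half = lam * ln 2, so lam = d_half / ln 2."""
    return d_half_mm / math.log(2)

# Macro-ECoG coherence halves at 5.1 mm (figure quoted in the abstract):
lam_macro = length_constant_from_half_distance(5.1)   # ~7.4 mm

# Conversely, a 0.53 mm spatial constant (lower end of the micro-ECoG
# range) would halve at:
d_half_micro = 0.53 * math.log(2)                     # ~0.37 mm
```

    The roughly twenty-fold gap between the two scales is what motivates the claim that micro-ECoG captures millimeter-scale dynamics invisible to macroelectrodes.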
In addition to contributions in the areas of signals and applications, this dissertation includes a full characterization of the microelectrodes, as well as collaborative work in which a custom, low-power microcontroller, with features optimized for biomedical implants, was taped out, fabricated in 65 nm CMOS technology, and tested. A new instruction implemented in this microcontroller reduced the energy consumed when moving large amounts of data into memory by as much as 44%. This dissertation represents a comprehensive investigation of surface LFPs as an interfacing medium between man and machine. The nature of this work, in both the breadth of topics and the depth of interdisciplinary effort, demonstrates an important and developing branch of engineering.

    Design for Reliability and Low Power in Emerging Technologies

    The continuous shrinking of transistor feature sizes is one of the most important drivers of growth in the semiconductor industry. For decades, both the integration density and the complexity of circuits have increased, a continuing trend that spans all modern process nodes. Until recently, transistor scaling went hand in hand with a reduction of the supply voltage, which lowered power consumption and thus kept the power density constant. With the advent of nanometer-scale feature sizes, however, this scaling slowed down. Numerous difficulties, such as reaching physical limits in fabrication and non-idealities in supply-voltage scaling, led to an increase in power density and, with it, to aggravated problems in ensuring reliability. These include, among others, transistor aging effects and excessive heating, not least through the growing impact of self-heating effects within the transistors. To keep such problems from endangering the reliability of a circuit, its internal signal delays are usually calculated very pessimistically. The resulting timing guardband ensures the correct functionality of the circuit, but at the cost of performance. Alternatively, the reliability of the circuit can also be increased by other techniques, such as operating at the zero-temperature coefficient or Approximate Computing. Although these techniques can save a large part of the usual timing guardband, they entail further consequences and trade-offs. Persistent challenges in the scaling of CMOS technologies have, moreover, led to an increased focus on promising emerging technologies.
One example is the Negative Capacitance Field-Effect Transistor (NCFET), which offers a remarkable performance gain over conventional FinFET transistors and may replace them in the future. Furthermore, circuit designers increasingly rely on complex, parallel structures rather than higher clock frequencies. These complex designs require modern power-management techniques in all aspects of the design. With the advent of emerging transistor technologies (such as NCFET), these power-management techniques have to be re-evaluated, since dependencies and proportions change. This work presents new approaches to both the analysis and the modeling of circuit reliability, in order to address the aforementioned challenges at multiple design levels. These approaches are divided into conventional techniques ((a), (b), (c) and (d)) and unconventional techniques ((e) and (f)), as follows: (a) Analysis of performance gains connected with maximizing power efficiency when operating near the transistor threshold voltage, in particular at the optimal operating point. Precisely determining such an optimal operating point is particularly challenging in multicore designs, since it shifts with the respective optimization objectives and the workload. (b) Revealing hidden interdependencies between transistor aging effects and supply-voltage fluctuations caused by "IR drops". A novel technique is presented that avoids both over- and underestimation when determining the timing guardband and consequently finds the smallest, yet sufficient, guardband. (c) Mitigation of transistor aging effects through "Graceful Approximation", a technique for increasing the clock frequency on demand.
The aging-induced timing guardband is replaced by Approximate Computing techniques, and quantization is used to guarantee sufficient accuracy in the computations. (d) Mitigation of temperature-dependent delay degradation by operating near the zero-temperature coefficient (N-ZTC). Operation at N-ZTC minimizes temperature-induced deviations in performance and power consumption; qualitative and quantitative comparisons against traditional timing guardbands are presented. (e) Modeling of power-management techniques for NCFET-based processors. The NCFET technology has unique properties through which conventional runtime voltage and frequency scaling (DVS/DVFS) yields suboptimal results, which calls for the NCFET-specific power-management techniques presented in this work. (f) Presentation of a novel heterogeneous multicore design in NCFET technology. The design contains identical cores; heterogeneity arises from applying each core's individually optimal configuration. Amdahl's law is extended to cover new system- and application-specific parameters and to demonstrate the benefits of the new design. The presented techniques are evaluated by means of gate-level implementations and simulations. In addition, system-level simulators are used to implement and simulate multicore designs. Analytical, gate-level and system-level simulations, considering both synthetic and real applications, are used to validate the techniques and to assess their effectiveness against the state of the art.
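    As background for item (f), the classical Amdahl's law that the thesis extends can be sketched as follows. The per-core speedup factor below is a simple stand-in for configuration-dependent parameters; the thesis's actual extended model includes further system- and application-specific terms not reproduced here.

```python
def amdahl_speedup(parallel_fraction, n_cores, core_speedup=1.0):
    """Classical Amdahl's law with an added uniform per-core speedup factor
    (an illustrative placeholder for configuration-dependent parameters,
    not the thesis's extended formulation)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial / core_speedup
                  + parallel_fraction / (n_cores * core_speedup))

# 90% parallel code on 16 identical cores:
s16 = amdahl_speedup(0.9, 16)          # = 1 / (0.1 + 0.9/16) = 6.4
# The serial fraction caps the speedup at 1/0.1 = 10x as cores grow:
s_inf = amdahl_speedup(0.9, 10**9)
```

    The motivation for extending the law is visible even in this toy form: once the core count saturates the parallel term, only per-core (configuration-level) improvements move the bound.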

    Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates

    Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes to interpreting complex, high-dimensional data streams like visual, auditory and somatosensory stimuli. However, the underlying computational principles that allow the brain to deal with unreliable, high-dimensional and often incomplete data, while consuming on the order of a few watts of power, are still mostly unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von-Neumann devices like “neuromorphic hardware”. First, we show that an ensemble of deterministic, spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme in which spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory input, without the need for any source of stochasticity. Second, we introduce a top-down framework in which neuronal and synaptic dynamics are derived using a least-action principle and gradient-based minimization. Combined, these neurosynaptic dynamics approximate real-time error backpropagation and are mappable to mechanistic components of cortical networks, whose dynamics can again be described within the proposed framework. The presented models narrow the gap between well-defined, functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models are naturally translated to hardware mimicking the vastly parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS-1”.
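    The "dynamics derived from gradient-based minimization" idea can be illustrated in its simplest possible form: if a state variable follows the negative gradient of an energy function, leaky-integrator-like dynamics fall out automatically. The quadratic energy below is a deliberately trivial stand-in; the thesis's framework derives much richer neurosynaptic dynamics from a least-action principle.

```python
import numpy as np

def gradient_flow(u0, target, lr=0.1, steps=200):
    """Hypothetical sketch: treat a 'membrane potential' u as minimizing
    E(u) = 0.5 * (u - target)^2 by gradient descent. The update
    u -= lr * (u - target) is a discretized leaky-integrator relaxation
    du/dt = -(u - target)."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u -= lr * (u - target)   # step along -dE/du
    return u

# From any initial condition, u relaxes to the energy minimum:
u_final = gradient_flow([0.0, 2.0], target=1.0)
```

    The appeal of such derivations is that the same variational recipe yields both the neuronal relaxation and the synaptic learning rule, rather than postulating them separately.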

    OPTIMIZATION OF TIME-RESPONSE AND AMPLIFICATION FEATURES OF EGOTs FOR NEUROPHYSIOLOGICAL APPLICATIONS

    In device engineering, basic neuron-to-neuron communication has recently inspired the development of increasingly structured and efficient brain-mimicking setups, in which the information flow can be processed with strategies resembling physiological ones. This is possible thanks to the use of organic neuromorphic devices, which can share the same electrolytic medium and adjust their reciprocal connection weights according to temporal features of the input signals. In a parallel, although conceptually deeply interconnected, fashion, device engineers are directing their efforts towards novel tools to interface the brain and to decipher its signalling strategies. This has led to several technological advances that allow scientists to transduce brain activity and, piece by piece, to create a detailed map of its functions, an effort that extends over a wide spectrum of length scales, zooming out from neuron-to-neuron communication up to the global activity of neural populations. Both of these scientific endeavours, namely mimicking neural communication and transducing brain activity, can benefit from the technology of Electrolyte-Gated Organic Transistors (EGOTs). EGOTs are low-power electronic devices that functionally integrate the electrolytic environment through the exploitation of organic mixed ionic-electronic conductors. This enables the conversion of ionic signals into electronic ones, making such architectures ideal building blocks for neuroelectronics, and has driven extensive scientific and technological investigation of EGOTs. Such devices have been successfully demonstrated both as transducers and amplifiers of electrophysiological activity and as neuromorphic units. These promising results arise from the fact that EGOTs are active devices, which widely extends their applicability window beyond the capabilities of passive electronics (i.e. electrodes) but poses major integration hurdles.
Being transistors, EGOTs require two driving voltages to operate. If, on the one hand, the presence of two voltages is an advantage for modulating the device response (e.g. for devising EGOT-based neuromorphic circuitry), on the other hand it can become detrimental in brain interfaces, since it may result in a non-zero bias applied directly to the brain. If such a voltage exceeds the electrochemical stability window of water, undesired faradaic reactions may lead to critical tissue and/or device damage. This work addresses EGOT applications in neuroelectronics from the dual perspective described above, spanning from neuromorphic device engineering to the implementation of in vivo brain-device interfaces. The advantages of using three-terminal architectures for neuromorphic devices, achieving reversible fine-tuning of their response plasticity, are highlighted, together with the possibility of obtaining a multilevel memory unit by acting on the gate potential. Additionally, a novel mode of operation for EGOTs is introduced, enabling full retention of the amplification capability while, at the same time, avoiding the application of a bias to the brain. Starting from these premises, a novel set of ultra-conformable active micro-epicortical arrays is presented, which fully integrate in-situ-fabricated EGOT recording sites onto medical-grade polyimide substrates. Finally, a whole organic circuitry for signal processing is presented, exploiting ad-hoc-designed organic passive components coupled with EGOT devices.
This unprecedented approach provides the possibility to sort complex signals into their constitutive frequency components in real time, thereby delineating innovative strategies to devise organic-based functional building blocks for brain-machine interfaces.

    Doctor of Philosophy

    With the explosion of chip transistor counts, the semiconductor industry has struggled to continue scaling computing performance in line with historical trends. In recent years, the de facto solution for utilizing excess transistors has been to increase the size of the on-chip data cache, allowing fast access to an increased portion of main memory. These large caches allowed the continued scaling of single-thread performance, which had not yet reached the limit of instruction-level parallelism (ILP). As we approach the potential limits of parallelism within a single-threaded application, new approaches such as chip multiprocessors (CMPs) have become popular for scaling performance by exploiting thread-level parallelism (TLP). This dissertation identifies the operating system as a ubiquitous area in which single-threaded and multithreaded performance have often been ignored by computer architects. We propose that novel hardware and OS co-design has the potential to significantly improve current chip multiprocessor designs, enabling increased performance and improved power efficiency. We show that the operating system contributes a nontrivial overhead to even the most computationally intense workloads, and that this OS contribution grows to a significant fraction of total instructions when executing several common applications found in the datacenter. We demonstrate that architectural improvements have had little to no effect on the performance of the OS over the last 15 years, leaving ample room for improvement. We specifically consider three potential solutions to improve OS execution on modern processors. First, we consider the potential of a separate operating system processor (OSP) operating concurrently with general-purpose processors (GPPs) in a chip multiprocessor organization, with several specialized structures acting as efficient conduits between these processors.
Second, we consider the potential of segregating existing caching structures to decrease cache interference between the OS and applications. Third, we propose that there are components within the OS itself that should be refactored to be both multithreaded and cache-topology aware, which, in turn, improves the performance and scalability of many-threaded applications.

    Field-Effect Sensors

    This Special Issue focuses on fundamental and applied research on different types of field-effect chemical sensors and biosensors. The topics include device concepts for field-effect sensors, their modeling and theory, as well as fabrication strategies. Field-effect sensors for biomedical analysis, food control, environmental monitoring, and the recording of neuronal and cell-based signals are discussed, among other topics.