52 research outputs found

    Nonlinear Dynamics of Neural Circuits


    Spiking Neural Networks and Ferroelectric Based Devices for Neuromorphic Computing

    Thanks to the rise of IoT and wearable electronics, smart sensors and low-power edge systems are becoming increasingly prevalent in our daily lives. In particular, the pursuit of more interactive and intelligent systems has pushed research and industry toward the integration of non-conventional algorithms into electronic appliances. Consequently, AI and deep learning have been proposed as a solution to a multitude of algorithmically difficult problems, such as facial and speech recognition, sentiment analysis, text synthesis, and autonomous driving. However, edge devices, due to their limited dimensions and severe energy constraints, present some limitations for the execution of deep neural networks. Hence, a popular strategy is to send the raw information acquired by the low-power devices to the cloud and wait for a processed response, instead of performing the computation locally. However, this approach increases the computational overhead of cloud servers, leads to a rather long response latency and, moreover, fails when an internet connection is not available. An alternative solution consists of moving computational capabilities directly to the edge through a distributed computing network, which can process data locally on its nodes. For edge computing, however, the energy budget required by DNNs running on conventional processing systems is problematic. Because a large fraction of that energy is dissipated by moving data back and forth from memory, significant efforts are being devoted to overcoming the so-called von Neumann bottleneck. In this context, neuromorphic computing has been developed to improve the energy requirements of AI by exploiting biologically inspired neural networks. This new branch of AI exploits VLSI analog circuits to implement SNNs in hardware, thus closely mimicking biological power-reduction strategies. This thesis aims to investigate and model neuromorphic solutions for more energy-efficient AI applications.
In particular, we observed in deep SNNs that the average spike rate tends to increase with the number of layers, leading to decreased energy efficiency during both the learning and the inference phases. To counteract this behavior, measures must be taken to control the spike rate without inducing a large number of silent neurons, which do not emit spikes and do not contribute to the training process. Therefore, we present a two-phase training strategy for deep feed-forward SNNs: our approach modifies the loss functions and introduces two phases of training to reduce the spike rate and address the silent-neuron issue. Moreover, we also examined the most important circuit implementations of SNNs and neuromorphic platforms to understand the challenges and the current state of the art of this topic. This thesis then delves into design techniques for ferroelectric-based memristors and their applications in neuromorphic devices, specifically focusing on FTJs as promising devices for implementing synapse-like capabilities. In particular, we merged a model for the polarization dynamics in MFIM structures with a novel charge-trapping model in order to investigate the relationship between ferroelectric polarization and charge trapping in the dielectric stack. Our simulation results, calibrated against experiments, present evidence that the partial compensation of the ferroelectric polarization by trapped charges strongly influences the operation of HZO-based FTJs. The common thread linking the activities on the training of SNNs to those on FTJ-based devices is the improvement of energy efficiency in neuromorphic systems.
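The two-phase idea above can be sketched as a loss-function modification. This is a minimal illustrative sketch, not the thesis's actual formulation: the function name, the penalty weight `lam`, and the silent-neuron threshold `r_min` are assumptions.

```python
import numpy as np

def two_phase_loss(task_loss, spike_rates, phase, lam=0.1, r_min=0.01):
    """Hypothetical sketch of a two-phase SNN training loss.

    Phase 1: penalize the mean spike rate to curb the layer-wise
    growth of activity.  Phase 2: additionally push neurons whose
    rate fell below r_min (silent neurons) back toward activity,
    so the rate penalty does not silence them entirely.
    """
    rate_penalty = lam * np.mean(spike_rates)             # discourage high rates
    loss = task_loss + rate_penalty
    if phase == 2:
        silent = np.clip(r_min - spike_rates, 0.0, None)  # nonzero only below r_min
        loss += lam * np.mean(silent)                     # nudge silent neurons up
    return loss

rates = np.array([0.0, 0.005, 0.2, 0.4])   # toy per-neuron spike rates
l1 = two_phase_loss(1.0, rates, phase=1)
l2 = two_phase_loss(1.0, rates, phase=2)   # slightly larger: silent-neuron term
```

The design choice mirrored here is that the second phase only adds a term; the first-phase rate penalty stays active throughout.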

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Synaptic Learning for Neuromorphic Vision - Processing Address Events with Spiking Neural Networks

    The brain outperforms conventional computer architectures in terms of energy efficiency, robustness, and adaptability. These aspects are also important for new technologies, so it is worth investigating which biological processes enable the brain to compute and how they can be realized in silicon. Drawing inspiration from how the brain computes requires a paradigm shift compared with conventional computer architectures. Indeed, the brain consists of nerve cells, called neurons, which are connected to one another via synapses and form self-organized networks. Neurons and synapses are complex dynamical systems governed by biochemical and electrical reactions; as a consequence, they can base their computations only on local information. In addition, neurons communicate with one another through short electrical pulses, so-called spikes, which travel across synapses. Computational neuroscientists attempt to model these computations with spiking neural networks. When implemented on dedicated neuromorphic hardware, spiking neural networks can, like the brain, perform fast, energy-efficient computations. Until recently, the benefits of this technology were limited by the lack of functional methods for programming spiking neural networks. Learning is a paradigm for programming spiking neural networks in which neurons organize themselves into functional networks. As in the brain, learning in neuromorphic hardware is based on synaptic plasticity. Synaptic plasticity rules characterize weight updates in terms of information locally available at the synapse. Learning thus happens continuously and online while sensory input is streamed into the network.
Conventional deep neural networks are usually trained by gradient descent. However, the constraints imposed by biological learning dynamics prevent the use of conventional backpropagation to compute the gradients. For example, continuous updates preclude the synchronous alternation between forward and backward phases. Moreover, memory limitations prevent the history of neural activity from being stored in the neuron, so that methods such as backpropagation-through-time are not possible. Novel solutions to these problems were proposed by computational neuroscientists within the time frame of this thesis. In this thesis, spiking neural networks are developed to solve visuomotor neurorobotics tasks. Indeed, biological neural networks originally evolved to control the body; robotics thus provides the artificial body for the artificial brain. On the one hand, this work contributes to current efforts to understand the brain by providing challenging closed-loop benchmarks, similar to what the biological brain faces. On the other hand, new ways of solving traditional robotics problems based on brain-inspired paradigms are presented. The research is carried out in two steps. First, promising synaptic plasticity rules are identified and compared on real-world event-based vision benchmarks. Second, novel methods for mapping visual representations to motor commands are presented. Neuromorphic visual sensors represent an important step toward brain-inspired paradigms. In contrast to conventional cameras, these sensors emit address events that correspond to local changes in light intensity.
The event-based paradigm enables energy-efficient and fast image processing but requires the derivation of new asynchronous algorithms. Spiking neural networks represent a subset of asynchronous algorithms that are inspired by the brain and suited to neuromorphic hardware technology. In close collaboration with computational neuroscientists, successful methods for learning spatio-temporal abstractions from the address-event representation are reported. It is shown that top-down synaptic plasticity rules, derived to optimize an objective function, outperform bottom-up rules based solely on observations of the brain. With this insight, a new synaptic plasticity rule called "Deep Continuous Local Learning" is introduced, which currently achieves the state of the art on event-based vision benchmarks. This rule was jointly derived, implemented, and evaluated during a stay at the University of California, Irvine. In the second part of this thesis, the visuomotor loop is closed by mapping the learned visual representations to motor commands. Three approaches to obtaining a visuomotor mapping are discussed: manual coupling, reward coupling, and minimization of the prediction error. It is shown how these approaches, implemented as synaptic plasticity rules, can be used to learn simple strategies and movements. This work paves the way for integrating brain-inspired computational paradigms into the field of robotics. It is even predicted that advances in neuromorphic technologies and plasticity rules will enable the development of high-performance, low-energy learning robots.
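The locality constraint described in this abstract can be illustrated with a toy weight update. This is a generic three-factor-style sketch under assumed names, not the actual "Deep Continuous Local Learning" rule: each synapse is updated from a presynaptic trace and a locally available error signal, with no global backward pass.

```python
import numpy as np

def local_plasticity_update(w, pre_trace, post_err, lr=0.01):
    """Illustrative local learning rule (hypothetical, not DECOLLE):
    every weight w[i, j] changes using only quantities present at
    that synapse -- the presynaptic eligibility trace pre_trace[j]
    and a locally computed error signal post_err[i] -- so learning
    can run continuously and online.
    """
    # outer product gives one independent, purely local update per synapse
    return w - lr * np.outer(post_err, pre_trace)

w = np.zeros((2, 3))                 # 2 postsynaptic, 3 presynaptic units
pre = np.array([1.0, 0.0, 0.5])     # presynaptic traces
err = np.array([0.2, -0.1])         # local error signals
w_new = local_plasticity_update(w, pre, err)
```

Note that synapses with a zero presynaptic trace (middle column) receive no update, which is exactly what "local" buys: no information about other synapses is needed.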

    Towards a better understanding of the precordial leads : an engineering point of view

    This thesis provides a comprehensive literature review of the evolution of electrocardiography to highlight the important theories behind the development of the electrocardiography device. More importantly, it discusses different electrode placements on the chest and their clinical advantages. This work presents the technical details of a new ECG device, developed at the MARCS Institute, which can record the Wilson Central Terminal (WCT) components in addition to the standard 12-lead ECG. This device was used to record from 147 patients at Campbelltown Hospital over three years. The first two years of recording cover 92 patients and were published on the PhysioNet platform under the name Wilson Central Terminal ECG database (WCTECGdb). This novel dataset was used to characterize the WCT signal and to investigate how the WCT impacts the precordial leads. Furthermore, the clinical influence of the WCT on the precordial leads in patients diagnosed with non-ST-segment elevation myocardial infarction (NSTEMI) is discussed. The work presented in this research revisits some ECG theories and investigates their validity using the recorded data. Furthermore, the influence of the left-leg potential on the recording of the precordial leads is presented, which led to investigating whether the WCT and the augmented vector foot (aVF) are proportional. Finally, a machine learning approach is proposed to minimise the Wilson Central Terminal.
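The WCT discussed above is, by definition, the average of the three limb electrode potentials, and each precordial lead is its chest electrode measured against that reference. The snippet below restates these standard textbook relations as a minimal sketch (it is not the thesis's new device or its minimisation approach; the sample values are purely illustrative):

```python
def wilson_central_terminal(ra, la, ll):
    """Wilson Central Terminal: the mean of the right-arm, left-arm,
    and left-leg electrode potentials, used as the zero reference
    for the precordial (chest) leads."""
    return (ra + la + ll) / 3.0

def precordial_lead(chest, ra, la, ll):
    # a precordial lead Vi is its chest electrode potential
    # measured against the WCT reference
    return chest - wilson_central_terminal(ra, la, ll)

def avf(ra, la, ll):
    # augmented vector foot: the left leg measured against
    # the average of the two arm electrodes
    return ll - (ra + la) / 2.0

# toy limb potentials (millivolts), purely illustrative
ra, la, ll = 0.1, 0.2, 0.3
wct = wilson_central_terminal(ra, la, ll)
v1 = precordial_lead(0.5, ra, la, ll)
```

Comparing `avf` with `wilson_central_terminal` makes the proportionality question raised in the abstract concrete: both are linear combinations of the same three limb potentials, differing only in their weights.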

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning

    BNAIC/BeneLearn 202

    Learning Biosignals with Deep Learning

    The healthcare system, ubiquitously recognized as one of the most influential systems in society, has been facing new challenges since the start of the decade. The myriad of physiological data generated by individuals, namely within the healthcare system, places a burden on physicians and reduces the effectiveness of patient data collection. Information systems and, in particular, novel deep learning (DL) algorithms offer a way to tackle this problem. This thesis aims to have an impact on biosignal research and industry by presenting DL solutions that can empower this field. For this purpose, an extensive study of how to incorporate and implement Convolutional Neural Networks (CNN), Recursive Neural Networks (RNN), and fully connected networks in biosignal studies is presented. Different architecture configurations were explored for signal processing and decision making and were implemented in three different scenarios: (1) biosignal learning and synthesis; (2) electrocardiogram (ECG) biometric systems; and (3) ECG anomaly detection systems. In (1), an RNN-based architecture was able to autonomously replicate three types of biosignals with a high degree of confidence. As for (2), three CNN-based architectures and an RNN-based architecture (the same used in (1)) were used both for biometric identification, reaching values above 90% for electrode-based datasets (Fantasia, ECG-ID and MIT-BIH) and 75% for the off-the-person dataset (CYBHi), and for biometric authentication, achieving Equal Error Rates (EER) of near 0% for Fantasia and MIT-BIH and below 4% for CYBHi.
As for (3), the abstraction of the healthy, clean ECG signal and the detection of deviations from it were developed and tested in two different scenarios: the presence of noise, using an autoencoder and a fully connected network (reaching 99% accuracy for binary classification and 71% for multi-class); and arrhythmia events, by adding an RNN to the previous architecture (57% accuracy and 61% sensitivity). In sum, these systems are shown to be capable of producing novel results. The incorporation of several AI systems into one could prove to be the next generation of preventive medicine: since the machines have access to different physiological and anatomical states, they could produce more informed solutions for the issues one may face in the future, increasing the performance of autonomous prevention systems that could be used in everyday life in remote places where access to medicine is limited. These systems will also help the study of signal behaviour in real-life contexts, as explainable AI could trigger this perception and link the inner states of a network with biological traits.
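The anomaly-detection scenario (3) rests on a reconstruction-error rule: an autoencoder trained only on clean ECG reconstructs healthy beats well, so a large per-beat error flags an anomaly. The sketch below assumes such a trained model exists; `fake_ae` is a toy stand-in, not the thesis's network, and the threshold is an assumed parameter.

```python
import numpy as np

def anomaly_scores(beats, reconstruct):
    """Per-beat mean squared reconstruction error: low for signals
    resembling the (clean) training data, high for deviations."""
    recon = reconstruct(beats)
    return np.mean((beats - recon) ** 2, axis=1)

def detect(beats, reconstruct, threshold):
    # flag every beat whose reconstruction error exceeds the threshold
    return anomaly_scores(beats, reconstruct) > threshold

# toy stand-in for a trained autoencoder: maps every sample to the
# global mean (0.0 here), so it "reconstructs" flat beats perfectly
fake_ae = lambda x: np.full_like(x, x.mean())
beats = np.array([[0.0, 0.0, 0.0, 0.0],     # "clean" beat
                  [0.0, 5.0, -5.0, 0.0]])   # "anomalous" beat
flags = detect(beats, fake_ae, threshold=1.0)
```

The same scoring function extends naturally to the multi-class and arrhythmia settings mentioned above by replacing the threshold with a classifier over the error (and, per the abstract, an RNN stage).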