
    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Full text link
    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine-intelligent approach for heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to extended battery life for wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices. Comment: 51 pages, 12 figures, 6 tables, 95 references. Under submission at Elsevier Neural Networks.
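    As a rough illustration of the unsupervised readout step only (the spike encoding, the CARLsim liquid simulation, and the particle-swarm neuron selection are not reproduced), the sketch below runs a minimal fuzzy c-means over made-up per-window spike-count vectors standing in for liquid states; the sizes, the toy data, and the two-regime interpretation are assumptions, not values from the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, eps=1e-6, seed=0):
    """Minimal fuzzy c-means over rows of X (shape: n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]          # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                            # standard FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centres, U

# Toy "liquid states": per-window spike counts of a few selected neurons,
# with two made-up regimes standing in for low vs. high heart rate.
rng = np.random.default_rng(1)
states = np.vstack([rng.poisson(3, (20, 5)), rng.poisson(10, (20, 5))]).astype(float)
centres, U = fuzzy_c_means(states, n_clusters=2)
labels = U.argmax(axis=1)                            # unsupervised hard assignment per window
```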

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Full text link
    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which the neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
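    To make the mechanism described above concrete, here is a minimal echo state network sketch in NumPy: a fixed random recurrent reservoir whose states are fitted with a ridge-regression linear readout on a toy one-step-ahead sine prediction task. The reservoir size, spectral radius, leak rate, and task are illustrative assumptions, not values taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Fixed random reservoir: these weights are never trained ---
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

def run_reservoir(u, leak=0.3):
    """Map a 1-D input sequence into the high-dimensional reservoir state space."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        pre = W_in @ np.atleast_1d(u_t) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)  # leaky-integrator update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 60, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Linear readout via ridge regression -- the only trained part of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
y_hat = X @ W_out
```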

    Evolving Artificial Neural Networks using Cartesian Genetic Programming

    Get PDF
    NeuroEvolution is the application of Evolutionary Algorithms to the training of Artificial Neural Networks. NeuroEvolution is thought to possess many benefits over traditional training methods, including: the ability to train recurrent network structures, the capability to adapt network topology, the ability to create heterogeneous networks of arbitrary transfer functions, and applicability to reinforcement as well as supervised learning tasks. This thesis presents a series of rigorous empirical investigations into many of these perceived advantages of NeuroEvolution. In this work it is demonstrated that the ability to simultaneously adapt network topology along with connection weights represents a significant advantage of many NeuroEvolutionary methods. It is also demonstrated that the ability to create heterogeneous networks comprising a range of transfer functions represents a further significant advantage. This thesis also investigates many potential benefits and drawbacks of NeuroEvolution which have been largely overlooked in the literature, including the presence and role of genetic redundancy in NeuroEvolution's search and whether program bloat is a limitation. The investigations presented focus on the use of a recently developed NeuroEvolution method based on Cartesian Genetic Programming. This thesis extends Cartesian Genetic Programming so that it can represent recurrent program structures, allowing for the creation of recurrent Artificial Neural Networks. This newly developed extension, Recurrent Cartesian Genetic Programming, and its application to Artificial Neural Networks are demonstrated to be extremely competitive in the domain of series forecasting.

    Deep spiking neural networks with applications to human gesture recognition

    Get PDF
    The spiking neural networks (SNNs), as the 3rd generation of Artificial Neural Networks (ANNs), are a class of event-driven neuromorphic algorithms that potentially have a wide range of application domains and are applicable to a variety of extremely low power neuromorphic hardware. The work presented in this thesis addresses the challenges of human gesture recognition using novel SNN algorithms. It discusses the design of these algorithms for both visual and auditory domain human gesture recognition as well as event-based pre-processing toolkits for audio signals. From the visual gesture recognition aspect, a novel SNN-based event-driven hand gesture recognition system is proposed. This system is shown to be effective in an experiment on hand gesture recognition with its spiking recurrent convolutional neural network (SCRNN) design, which combines a designed convolution operation and recurrent connectivity to maintain spatial and temporal relations with address-event-representation (AER) data. The proposed SCRNN architecture can achieve arbitrary temporal resolution, which means it can exploit temporal correlations between event collections. This design utilises a backpropagation-based training algorithm and does not suffer from gradient vanishing/explosion problems. From the audio perspective, a novel end-to-end spiking speech emotion recognition (SER) system is proposed. This system employs the MFCC as its main speech feature extractor as well as a self-designed latency coding algorithm to efficiently convert the raw signal to AER input that can be used for the SNN. A two-layer spiking recurrent architecture is proposed to address temporal correlations between spike trains. The robustness of this system is supported by several open public datasets, which demonstrate state-of-the-art recognition accuracy and a significant reduction in network size, computational costs, and training speed. In addition to directly contributing to neuromorphic SER, this thesis proposes a novel speech-coding algorithm based on the working mechanism of the human auditory system. The algorithm mimics the functionality of the cochlea and successfully provides an alternative method of event-data acquisition for audio-based data. The algorithm is then further simplified and extended into an application of speech enhancement which is jointly used in the proposed SER system. This speech-enhancement method uses the lateral inhibition mechanism as a frequency coincidence detector to remove uncorrelated noise in the time-frequency spectrum. The method is shown to be effective by experiments for up to six types of noise.
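    The thesis's own latency coding scheme is not reproduced here; the following sketch shows a generic time-to-first-spike encoding under the common convention that larger feature values fire earlier, applied to made-up MFCC-like frames to produce address-event (time, channel) pairs. The frame size, time scale, and data are illustrative assumptions.

```python
import numpy as np

def latency_encode(features, t_max=100.0):
    """Generic time-to-first-spike encoding: larger feature values spike earlier.

    `features` is an (n_frames, n_channels) array (e.g. MFCC frames); the output
    is a list of (time, channel) address-event pairs, frame by frame.
    """
    f_min, f_max = features.min(), features.max()
    norm = (features - f_min) / (f_max - f_min + 1e-12)   # scale to [0, 1]
    spike_times = t_max * (1.0 - norm)                    # strong input -> early spike
    events = []
    for frame, times in enumerate(spike_times):
        for channel in np.argsort(times):                 # emit events in time order
            events.append((frame * t_max + times[channel], int(channel)))
    return events

# Toy usage with made-up "MFCC" frames: 50 frames x 13 coefficients.
fake_mfcc = np.random.default_rng(0).normal(size=(50, 13))
aer_events = latency_encode(fake_mfcc)
```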

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    Get PDF
    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on the screen, a spelling device, or a game). These tools have the advantage of having real-time access to the ongoing brain activity of the individual, which can provide insight into the user’s emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to open their minds to imagine potential further developments.

    Stochastic dynamics and delta-band oscillations in clustered spiking networks

    Get PDF
    Following experimental measurements of clustered connectivity in the cortex, recent studies have found that clustering connections in simulated spiking networks causes transitions between high and low firing-rate states in subgroups of neurons. An open question is to what extent the sequence of transitions in such networks can be related to existing statistical and mechanistic models of sequence generation. In this thesis we present several studies of the relationship between connection structure and network dynamics in balanced spiking networks. We investigate which qualities of the network connection matrix support the generation of state sequences, and which properties determine the specific structure of transitions between states. We find that adding densely overlapping clusters with equal levels of recurrent connectivity to a network with dense inhibition can produce sequential winner-takes-all dynamics in which high-activity states pass between correlated clusters. This activity is reflected in the power spectrum of spiking activity as a peak in the low-frequency delta range. We describe and verify sequence dynamics with a Markov chain framework, and compare them mechanistically to “latching” models of sequence generation. Additionally, we quantify the chaos of clustered networks and find that minimally separated states diverge in distinct stages. The results clarify the computational capabilities of clustered spiking networks and their relationship to experimental findings. We conclude that the results provide a supporting intermediate link between abstract models and biological instances of sequence generation.
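    The Markov chain description of state transitions mentioned above can be illustrated by estimating a first-order transition matrix from a sequence of active-cluster labels; the toy label sequence below is invented for illustration and is not the thesis's analysis pipeline.

```python
import numpy as np

def transition_matrix(state_sequence, n_states):
    """Estimate a first-order Markov transition matrix from a label sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(state_sequence[:-1], state_sequence[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observations fall back to a uniform distribution.
    return np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n_states)

# Toy sequence of "active cluster" labels (e.g. from thresholding cluster firing rates).
seq = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
P = transition_matrix(seq, n_states=3)
print(P)
```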

    Perspective on unconventional computing using magnetic skyrmions

    Full text link
    Learning and pattern recognition inevitably require memory of previous events, a feature that conventional CMOS hardware needs to simulate artificially. Dynamical systems naturally provide the memory, complexity, and nonlinearity needed for a plethora of different unconventional computing approaches. In this perspective article, we focus on the unconventional computing concept of reservoir computing and provide an overview of key physical reservoir works reported. We focus on the promising platform of magnetic structures and, in particular, skyrmions, which potentially allow for low-power applications. Moreover, we discuss skyrmion-based implementations of Brownian computing, which has recently been combined with reservoir computing. This computing paradigm leverages the thermal fluctuations present in many skyrmion systems. Finally, we provide an outlook on the most important challenges in this field. Comment: 19 pages and 3 figures.

    Special Topics in Information Technology

    Get PDF
    This open access book presents thirteen outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the thirteen best theses defended in 2019-20 and selected for the IT PhD Award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, description of methods, main achievements and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.

    Architectures and Algorithms for Intrinsic Computation with Memristive Devices

    Get PDF
    Neuromorphic engineering is the research field dedicated to the study and design of brain-inspired hardware and software tools. Recent advances in emerging nanoelectronics promote the implementation of synaptic connections based on memristive devices. Their non-volatile, modifiable conductance was shown to exhibit the synaptic properties often used in connecting and training neural layers. With their nanoscale size and non-volatile memory property, they promise a next step in designing more area- and energy-efficient neuromorphic hardware. My research deals with the challenges of harnessing memristive device properties that go beyond the behaviors utilized for synaptic weight storage. Based on devices that exhibit non-linear state changes and volatility, I present novel architectures and algorithms that can harness such features for computation. The crossbar architecture is a dense array of memristive devices placed in between horizontal and vertical nanowires. The regularity of this structure does not inherently provide the means for nonlinear computation of applied input signals. By introducing a modulation scheme that relies on nonlinear memristive device properties, heterogeneous state patterns of applied spatiotemporal input data can be created within the crossbar. In this setup, the untrained and dynamically changing states of the memristive devices offer a useful platform for information processing. Based on the MNIST data set, I'll demonstrate how the temporal aspect of memristive state volatility can be utilized to reduce system size and training complexity for high-dimensional input data. With 3 times fewer neurons and 15 times fewer synapses to train compared to other memristor-based implementations, I achieve comparable classification rates of up to 93%. Exploiting dynamic state changes rather than precisely tuned stable states, this approach can tolerate device variation up to 6 times higher than reported levels. Random assemblies of memristive networks are analyzed as a substrate for intrinsic computation in connection with reservoir computing, a computational framework that harnesses observations of inherent dynamics within complex networks. Architectural and device-level considerations lead to new levels of task complexity, which random memristive networks are now able to solve. A hierarchical design composed of independent random networks benefits from a diverse set of topologies and achieves prediction errors (NRMSE) on the time-series prediction task NARMA-10 as low as 0.15, compared to 0.35 for an echo state network. Physically plausible network modeling is performed to investigate the relationship between network dynamics and energy consumption. Generally, increased network activity comes at the cost of exponentially increasing energy consumption due to the nonlinear voltage-current characteristics of memristive devices. A trade-off that allows linear scaling of energy consumption is provided by the hierarchical approach. Rather than designing individual memristive networks with high switching activity, a collection of less dynamic but independent networks can provide more diverse network activity per unit of energy. My research extends the possibilities of including emerging nanoelectronics in neuromorphic hardware. It establishes memristive devices beyond storage and motivates future research to further embrace memristive device properties that can be linked to different synaptic functions. Continuing to exploit the functional diversity of memristive devices will lead to novel architectures and algorithms that study rather than dictate the behavior of such devices, with the benefit of creating robust and efficient neuromorphic hardware.
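    For readers unfamiliar with the benchmark and metric quoted above, the sketch below generates the commonly used NARMA-10 series and computes the normalised root-mean-square error (NRMSE); the recurrence constants are the standard ones from the reservoir computing literature, and the naive baseline predictor is only a placeholder, not the memristive network described in the thesis.

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate the standard NARMA-10 benchmark series from a uniform [0, 0.5] input."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()   # sum over the last 10 outputs
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return u, y

def nrmse(y_true, y_pred):
    """Normalised root-mean-square error, the figure of merit quoted above."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

u, y = narma10(2000)
print(nrmse(y[10:], np.roll(y, 1)[10:]))   # naive last-value baseline, for scale only
```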

    The Fuzziness in Molecular, Supramolecular, and Systems Chemistry

    Get PDF
    Fuzzy logic is a good model for the human ability to compute with words. It is based on the theory of fuzzy sets. A fuzzy set is different from a classical set because it breaks the Law of the Excluded Middle. In fact, an item may belong to a fuzzy set and its complement at the same time, with the same or different degrees of membership. The degree of membership of an item in a fuzzy set can be any real number between 0 and 1. This property enables us to deal with all those statements whose truths are a matter of degree. Fuzzy logic plays a relevant role in the field of Artificial Intelligence because it enables decision-making in complex situations, where many intertwined variables are involved. Traditionally, fuzzy logic is implemented through software on a computer or, even better, through analog electronic circuits. Recently, the idea of using molecules and chemical reactions to process fuzzy logic has been promoted. In fact, the molecular world is fuzzy in its essence. The overlapping of quantum states, on the one hand, and the conformational heterogeneity of large molecules, on the other, enable context-specific functions to emerge in response to changing environmental conditions. Moreover, analog input–output relationships, involving not only electrical but also other physical and chemical variables, can be exploited to build fuzzy logic systems. The development of “fuzzy chemical systems” is tracing a new path in the field of artificial intelligence. This new path shows that artificially intelligent systems can be implemented not only through software and electronic circuits but also through solutions of properly chosen chemical compounds. The design of chemical artificial intelligence systems and chemical robots promises to have a significant impact on science, medicine, economy, security, and wellbeing. Therefore, it is my great pleasure to announce a Special Issue of Molecules entitled “The Fuzziness in Molecular, Supramolecular, and Systems Chemistry.” All researchers who experience the fuzziness of the molecular world or use fuzzy logic to understand chemical complex systems will be interested in this book.
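    As a small illustration of the fuzzy-set idea described above, the snippet below defines a made-up membership function for a set "warm" and shows that a value can belong to both the set and its complement with non-zero degree, so the Law of the Excluded Middle does not hold. The membership function and its thresholds are invented purely for illustration.

```python
import numpy as np

def warm_membership(temp_c):
    """Membership in the made-up fuzzy set 'warm': 0 below 10 C, 1 above 30 C, linear in between."""
    return float(np.clip((temp_c - 10.0) / 20.0, 0.0, 1.0))

t = 22.0
mu_warm = warm_membership(t)       # 0.6
mu_not_warm = 1.0 - mu_warm        # standard fuzzy complement: 0.4
# Unlike a classical set, 22 C belongs to both 'warm' and 'not warm' with non-zero degree,
# so the Law of the Excluded Middle is broken.
print(mu_warm, mu_not_warm, min(mu_warm, mu_not_warm))
```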