    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of the relevant motor neurons and, by extension, of the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Although myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
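    The abstract only summarizes Paper I's multi-label CNN decoder. Purely as an illustration of the general idea (not the dissertation's architecture), the following minimal PyTorch sketch treats a window of HD-sEMG as a channels-by-time image and trains with a sigmoid-per-class loss so that several movements can be active at once; the grid size, window length, class count, and all hyperparameters are assumed values.

```python
# Minimal sketch of a multi-label CNN for windowed HD-sEMG decoding.
# Architecture and hyperparameters are illustrative assumptions, not Paper I's model.
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    def __init__(self, n_channels=64, window_len=200, n_movements=12):
        super().__init__()
        # Treat the electrode grid x time window as a one-channel "image".
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_movements)

    def forward(self, x):            # x: (batch, 1, n_channels, window_len)
        h = self.features(x).flatten(1)
        return self.classifier(h)    # raw logits, one per movement (multi-label)

model = EMGConvNet()
x = torch.randn(8, 1, 64, 200)                    # a batch of fake EMG windows
targets = torch.randint(0, 2, (8, 12)).float()    # several movements may be active at once
loss = nn.BCEWithLogitsLoss()(model(x), targets)  # sigmoid-per-class loss suits multi-label decoding
loss.backward()
```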

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology. Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202

    Bayesian sparsity and class sparsity priors for dictionary learning and coding

    Dictionary learning methods continue to gain popularity for the solution of challenging inverse problems. In the dictionary learning approach, the computational forward model is replaced by a large dictionary of possible outcomes, and the problem is to identify the dictionary entries that best match the data, akin to traditional query matching in search engines. Sparse coding techniques are used to guarantee that the dictionary matching identifies only a few of the dictionary entries, and dictionary compression methods are used to reduce the complexity of the matching problem. In this article, we propose a workflow to facilitate the dictionary matching process. First, the full dictionary is divided into subdictionaries that are separately compressed. The error introduced by the dictionary compression is handled in the Bayesian framework as a modeling error. Furthermore, we propose a new Bayesian data-driven group sparsity coding method to help identify subdictionaries that are not relevant for the dictionary matching. After discarding irrelevant subdictionaries, the dictionary matching is addressed as a deflated problem using sparse coding. The compression and deflation steps can lead to substantial decreases in computational complexity. The effectiveness of compensating for the dictionary compression error and of using the novel group sparsity promotion to deflate the original dictionary is illustrated by applying the methodology to two real-world problems: glitch detection in the LIGO experiment and hyperspectral remote sensing.
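    As an illustration of the kind of two-stage workflow described above (not the paper's actual Bayesian algorithm), the sketch below scores each subdictionary with a simple least-squares group-energy criterion, discards the weakest groups, and then runs sparse coding on the deflated dictionary with scikit-learn's orthogonal matching pursuit; all sizes, the scoring rule, and the thresholds are assumptions.

```python
# Illustrative two-stage dictionary matching: group-wise deflation, then sparse coding.
# Group scoring rule and all dimensions are assumptions made for this sketch.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
m, atoms_per_group, n_groups = 128, 50, 10
groups = [rng.standard_normal((m, atoms_per_group)) for _ in range(n_groups)]
D = np.hstack(groups)                         # full dictionary, atoms as columns
y = groups[3] @ rng.standard_normal(atoms_per_group)   # data generated from group 3

# Stage 1: score each subdictionary by how much of the data it explains
# (least-squares projection energy) and keep only the strongest groups.
scores = []
for G in groups:
    coef, *_ = np.linalg.lstsq(G, y, rcond=None)
    scores.append(np.linalg.norm(G @ coef) / np.linalg.norm(y))
keep = np.argsort(scores)[-3:]                # retain the 3 most relevant groups

# Stage 2: sparse coding on the deflated dictionary.
D_deflated = np.hstack([groups[i] for i in keep])
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D_deflated, y)
print("kept groups:", sorted(keep), "nonzero coefficients:", np.count_nonzero(omp.coef_))
```

    The point of the deflation step is that the expensive sparse solve runs on a much smaller dictionary; in the paper this pruning is done with a Bayesian group-sparsity prior rather than the crude energy score used here.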

    Exercise-Induced Hypoalgesia in people with chronic low back pain

    Chronic low back pain (CLBP) is one of the most prevalent musculoskeletal disorders and a major contributor to disability worldwide. Exercise is recommended in guidelines as a cornerstone of the management of CLBP. One of the manifold benefits of exercise is its influence on endogenous pain modulation. An acute bout of exercise elicits a temporary decrease in pain sensitivity, described as exercise-induced hypoalgesia (EIH). This thesis explores EIH in people with CLBP via a systematic review and observational studies. The systematic review included 17 studies in people with spinal pain. Of those, four studies considered people with CLBP, revealing very low-quality evidence with conflicting results. EIH was elicited following remote cycling tasks (two studies, fair risk of bias), but EIH was altered following local repetitive lifting tasks (two studies, good/fair risk of bias). The observational studies investigated EIH following three different tasks in participants with and without CLBP and explored the stability of EIH results. Quantitative sensory testing yielded conflicting results as to whether EIH is impaired in people with CLBP. EIH was only elicited in asymptomatic participants following a repeated lifting task, but participants both with and without CLBP showed EIH following a lumbar resistance task and a brisk walking task. This thesis provides the first evidence on the stability of EIH over multiple sessions. However, interpretation of the results can be challenging, as stability was poor and changes in lumbar pressure pain thresholds also occurred after rest alone. These findings are important to inform future studies and to help elucidate the complex phenomenon of EIH in people with and without CLBP, particularly since stability is a prerequisite for future research.

    An information-theoretic approach to understanding the neural coding of relevant tactile features

    Objective: Traditional theories in neuroscience state that tactile afferents present in the glabrous skin of the human hand encode tactile information following a submodality segregation strategy, meaning that each modality (e.g., motion, vibration, shape) is encoded by a different afferent class. Modern theories suggest a submodality convergence instead, in which different afferent classes work together to capture information about the environment through the tactile sense. Typically, studies involve electrophysiological recordings of tens of afferents; at the same time, the human hand contains around 17,000 afferents. In this thesis, we want to tackle the theoretical gap this poses. Specifically, we aim to address whether the peripheral nervous system relies on population coding to represent tactile information and whether such population coding enables us to disambiguate submodality convergence from the classical segregation. Approach: Understanding the encoding and flow of information in the nervous system is one of the main challenges of modern neuroscience. Neural signals are highly variable and may be non-linear. Moreover, there exist several candidate codes compatible with sensory and behavioral events; for example, they can rely on single cells or on populations, and on rate or on timing precision. Information-theoretic methods can capture non-linearities while being model-independent, statistically robust, and mathematically well-grounded, making them an ideal candidate for designing pipelines to analyze neural data. Although information-theoretic methods are powerful for our objective, the vast majority of neural signals we can acquire from living systems make analyses highly problem-specific. This is because of the rich variety of biological processes involved (continuous, discrete, electrical, chemical, optical, and so on). Main results: The first step towards solving the aforementioned challenges was to have a solid methodology we could trust and rely on. Consequently, the first deliverable of this thesis is a toolbox that gathers classical and state-of-the-art information-theoretic approaches and blends them with advanced machine learning tools to process and analyze neural data. This toolbox also provides specific guidance for calcium imaging and electrophysiology analyses, encompassing both simulated and experimental data. We then designed an information-theoretic pipeline to analyze large-scale simulations of the tactile afferents that overcomes the current limitations of experimental studies in the field of touch and the peripheral nervous system. We dissected the importance of population coding for the different afferent classes, given their spatiotemporal dynamics. We also demonstrated that different afferent classes simultaneously encode information about very simple features, and that combining classes increases information levels, adding support to the submodality convergence theory. Significance: Fundamental knowledge about touch is essential both to design human-like robots exhibiting naturalistic exploration behavior and to design prostheses that can properly integrate and provide their user with relevant and useful information for interacting with their environment. Demonstrating that the peripheral nervous system relies on heterogeneous population coding can change the design paradigm of artificial systems, both in terms of which sensors to choose and which algorithms to use, especially in neuromorphic implementations.
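    At its core, the population-coding question above is a comparison of mutual information estimates for single afferent classes versus their combination. Purely as an illustration (not the thesis's pipeline or toolbox), the sketch below applies a plug-in histogram estimator to simulated spike counts from two hypothetical afferent "classes" and compares the information they carry separately and jointly; the stimulus set, tuning curves, and trial counts are all assumptions.

```python
# Plug-in (histogram) mutual information between a discrete stimulus feature and
# simulated spike counts, comparing single afferent classes with their combination.
# All response statistics here are simulated assumptions, not real afferent data.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20000
stimulus = rng.integers(0, 4, n_trials)            # 4 discrete feature values (assumed)

# Two afferent classes with different, partially redundant tuning (assumed rates).
rates_a = np.array([2.0, 4.0, 6.0, 8.0])[stimulus]
rates_b = np.array([8.0, 4.0, 6.0, 2.0])[stimulus]
counts_a = rng.poisson(rates_a)
counts_b = rng.poisson(rates_b)

def mutual_information(x, y):
    """Plug-in mutual information (in bits) between two discrete variables."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Combine the two classes into a single joint response symbol.
combined = counts_a * (counts_b.max() + 1) + counts_b

print("I(stimulus; class A) =", round(mutual_information(stimulus, counts_a), 3), "bits")
print("I(stimulus; class B) =", round(mutual_information(stimulus, counts_b), 3), "bits")
print("I(stimulus; A and B) =", round(mutual_information(stimulus, combined), 3), "bits")
```

    Comparing the joint estimate with the single-class estimates indicates how much the two classes carry redundant versus complementary information about the feature, which is the kind of contrast the convergence-versus-segregation question turns on.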

    Machine Learning and Its Application to Reacting Flows

    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems, and their application to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This matters for the world’s total primary energy supply (TPES): more than 90% of this supply is provided by combustion technologies, and combustion has non-negligible effects on the environment. Although alternative technologies based on renewable energies are emerging, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical or not is an entirely different question, and the answer depends on the respondent. However, a pragmatic analysis suggests that the combustion share of the TPES is likely to be more than 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology so that efficient and “greener” combustion systems that are friendlier to the environment can be designed. The book covers the current state of the art in these two topics and outlines the challenges involved, as well as the merits and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. This book is unique, as no other book offers similar coverage of topics ranging from big data analysis and machine learning algorithms to their applications in combustion science and system design for energy generation.

    Generative Model based Training of Deep Neural Networks for Event Detection in Microscopy Data

    Several imaging techniques employed in the life sciences rely heavily on machine learning methods to make sense of the data they produce. These include calcium imaging and multi-electrode recordings of neural activity, single molecule localization microscopy, spatially-resolved transcriptomics and particle tracking, among others. All of them produce only indirect readouts of the spatiotemporal events they aim to record. The objective when analysing data from these methods is the identification of patterns that indicate the location of the sought-after events, e.g. spikes in neural recordings or fluorescent particles in microscopy data. Existing approaches for this task invert a forward model, i.e. a mathematical description of the process that generates the observed patterns for a given set of underlying events, using established methods like MCMC or variational inference. Perhaps surprisingly, deep learning long saw little use in this domain, even though it became the dominant approach in the field of pattern recognition over the previous decade. The principal reason is that, in the absence of the labeled data needed for supervised optimization, it remains unclear how neural networks can be trained to solve these tasks. To unlock the potential of deep learning, this thesis proposes different methods for training neural networks using forward models and without relying on labeled data. The thesis rests on two publications: In the first publication we introduce an algorithm for spike extraction from calcium imaging time traces. Building on the variational autoencoder framework, we simultaneously train a neural network that performs spike inference and optimize the parameters of the forward model. This approach combines several advantages that were previously incongruous: it is fast at test time, can be applied to different non-linear forward models and produces samples from the posterior distribution over spike trains. The second publication deals with the localization of fluorescent particles in single molecule localization microscopy. We show that an accurate forward model can be used to generate simulations that act as a surrogate for labeled training data. Careful design of the output representation and loss function results in a method with outstanding precision across experimental designs and imaging conditions. Overall, this thesis highlights how neural networks can be applied for precise, fast and flexible model inversion on this class of problems and how this opens up new avenues to achieve performance beyond what was previously possible.
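    The core idea of the second publication, using an accurate forward model as a generator of surrogate labeled data, can be sketched in a few lines. The toy forward model, network, and training loop below are illustrative assumptions, not the thesis's method: noisy images of Gaussian spots are simulated on the fly and a small convolutional network is trained to regress the underlying emitter-presence map.

```python
# Training a detector on simulated data from a toy forward model, used as a
# surrogate for labeled data. Forward model and network are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate_batch(batch=16, size=32, sigma=1.5):
    """Toy forward model: random point emitters rendered as Gaussian spots plus noise."""
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    images, labels = [], []
    for _ in range(batch):
        n = torch.randint(1, 5, (1,)).item()
        pos = torch.rand(n, 2) * size
        label = torch.zeros(size, size)
        image = torch.zeros(size, size)
        for y, x in pos:
            image += torch.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
            label[int(y.clamp(0, size - 1)), int(x.clamp(0, size - 1))] = 1.0
        images.append(image + 0.1 * torch.randn(size, size))   # camera noise
        labels.append(label)
    return torch.stack(images)[:, None], torch.stack(labels)[:, None]

net = nn.Sequential(                      # small fully convolutional detector
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):                   # labels are free, so just keep simulating
    x, target = simulate_batch()
    loss = F.binary_cross_entropy_with_logits(net(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```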

    Measuring spectrally resolved information processing in neural data

    Background: The human brain, an incredibly complex biological system comprising billions of neurons and trillions of synapses, possesses remarkable capabilities for information processing and distributed computations. Neurons, the fundamental building blocks, perform elementary operations on their inputs and collaborate extensively to execute intricate computations, giving rise to cognitive functions and behavior. Notably, distributed information processing in the brain relies heavily on rhythmic neural activity characterized by synchronized oscillations at specific frequencies. These oscillations play a crucial role in coordinating brain activity and facilitating communication between different neural circuits [1], effectively acting as temporal windows that enable efficient information exchange within specific frequency ranges. To understand distributed information processing in neural systems, it can be helpful to break it down into its components, i.e., information transfer, storage, and modification, but this requires precise mathematical definitions for each component. Thankfully, these definitions have recently become available [2]. Information theory is a natural choice for measuring information processing, as it offers a mathematically complete description of the concepts of information and communication. The fundamental information-processing operations are considered essential prerequisites for achieving universal information processing in any system [3]. By quantifying and analyzing these operations, we gain valuable insights into the brain’s complex computations and cognitive abilities. As information processing in the brain is intricately tied to rhythmic behavior, there is a need to establish a connection between information-theoretic measures and frequency components. Previous attempts to achieve frequency-resolved information-theoretic measures have mostly relied on narrowband filtering [4], which comes with known issues such as phase shifting and high false-positive rates [5], or on simplifying the computation to a few variables [6], which risks missing important information in the analysed brain signals. Therefore, the current work aims to establish frequency-resolved measures of two crucial components of information processing: information transfer and information storage. By proposing methodological advancements, this research seeks to shed light on the role of neural oscillations in information processing within the brain. Furthermore, a more comprehensive investigation was carried out on the communication between two critical brain regions responsible for motor inhibition in the frontal cortex: the right inferior frontal gyrus (rIFG) and the pre-supplementary motor area (pre-SMA). Here, neural oscillations in the beta band (12–30 Hz) have been proposed to play a pivotal role in response inhibition. A long-standing question in the field has been to disentangle which of these two brain areas first signals the stopping process and drives the other [7]. Furthermore, it was hypothesized that beta oscillations carry the information transfer between these regions. The present work addresses these methodological problems and investigates spectral information processing in neural data in three studies. Study 1 focuses on the critical role of information transfer, measured by transfer entropy, in distributed computation. Understanding the patterns of information transfer is essential for unraveling the computational algorithms in complex systems, such as the brain.
As many natural systems rely on rhythmic processes for distributed computations, a frequency-resolved measure of information transfer becomes highly valuable. To address this, a novel algorithm is presented that efficiently identifies the frequencies responsible for sending and receiving information in a network. The approach utilizes the invertible maximum overlap discrete wavelet transform (MODWT) to create surrogate data for computing transfer entropy, eliminating issues associated with phase shifts and filtering. However, measuring frequency-resolved information transfer poses a partial information decomposition problem [8] that is yet to be fully resolved. The algorithm’s performance is validated on simulated data and applied to human magnetoencephalography (MEG) and ferret local field potential (LFP) recordings. In human MEG, the study unveils a complex spectral configuration of cortical information transmission, showing top-down information flow from very high frequencies (above 100 Hz) to both similarly high frequencies and frequencies around 20 Hz in the temporal cortex. Contrary to the current assumption, the findings suggest that low frequencies do not solely send information to high frequencies. In the ferret LFP, the prefrontal cortex transmits information at low frequencies, specifically within the range of 4–8 Hz, while on the receiving end V1 prefers to operate at very high frequencies (above 125 Hz). The spectrally resolved transfer entropy promises to deepen our understanding of rhythmic information exchange in natural systems, shedding light on the computational role of oscillations in cognitive functions. In study 2, the primary focus lay on the second fundamental aspect of information processing: active information storage (AIS). The AIS estimates how much information in the next measurement of a process can be predicted by examining its past state. In processes that either produce little information (low entropy) or are highly unpredictable, the AIS is low, whereas processes that are predictable but visit many different states with equal probabilities exhibit high AIS [9]. Within this context, we introduced a novel spectrally resolved AIS. Utilizing intracortical recordings of neural activity in anesthetized ferrets before and after loss of consciousness (LOC), the study reveals that the modulation of AIS by anesthesia is highly specific to different frequency bands, cortical layers, and brain regions. The effects of anesthesia on AIS are most prominent in the supragranular layers for the high/low gamma band, while the alpha/beta band exhibits the strongest decrease in AIS in the infragranular layers, in accordance with predictive coding theory. Additionally, isoflurane impacts local information processing in a frequency-specific manner: increases in isoflurane concentration lead to a decrease in AIS in the alpha band but to an increase in AIS in the delta range (<2 Hz). In sum, analyzing spectrally resolved AIS provides valuable insights into changes in cortical information processing under anesthesia. With rhythmic neural activity playing a significant role in biological neural systems, the introduction of frequency-specific components in active information storage allows a deeper understanding of local information processing in different brain areas and under various conditions.
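The wavelet-surrogate idea behind the spectrally resolved transfer entropy of study 1 can be illustrated compactly. The sketch below is not the thesis's algorithm: it uses PyWavelets' stationary (undecimated) wavelet transform as a stand-in for the MODWT, destroys the temporal structure of one scale by shuffling its coefficients to build a surrogate source, and measures the resulting drop in a simple binned transfer entropy; the signal model, wavelet, embedding, and estimator are all assumptions.

```python
# Sketch: frequency-resolved contribution of a source signal to transfer entropy,
# via scale-wise shuffled wavelet surrogates. Uses pywt.swt as an undecimated
# (MODWT-like) transform; estimator, signals, and parameters are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 4096
source = rng.standard_normal(n)
target = np.zeros(n)
for t in range(2, n):                       # target driven by a delayed copy of the source
    target[t] = 0.4 * target[t - 1] + 0.8 * source[t - 2] + 0.3 * rng.standard_normal()

def binned_te(x, y, lag=2, bins=4):
    """Plug-in transfer entropy TE(x -> y) in bits, with one-sample histories."""
    def digitize(v):
        return np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1]))
    xp, yp, yf = digitize(x[:-lag]), digitize(y[:-lag]), digitize(y[lag:])
    def entropy(*vars_):
        _, counts = np.unique(np.stack(vars_, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()
    # I(Y_future; X_past | Y_past) written as a sum of joint entropies.
    return entropy(yf, yp) + entropy(yp, xp) - entropy(yf, yp, xp) - entropy(yp)

def shuffled_scale_surrogate(x, level_to_shuffle, wavelet="db4", levels=5):
    """Invert an undecimated wavelet transform after shuffling one detail scale."""
    coeffs = pywt.swt(x, wavelet, level=levels)
    out = []
    for i, (ca, cd) in enumerate(coeffs):
        if i == level_to_shuffle:
            cd = rng.permutation(cd)        # destroy temporal structure at this scale only
        out.append((ca, cd))
    return pywt.iswt(out, wavelet)

te_full = binned_te(source, target)
for level in range(5):
    te_surr = binned_te(shuffled_scale_surrogate(source, level), target)
    print(f"scale {level}: TE drop = {te_full - te_surr:.3f} bits")
```

Scales whose shuffling produces a large drop in transfer entropy are, in this toy setting, the frequency bands through which the source drives the target; the thesis develops this logic rigorously with the MODWT and proper statistical testing.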
In study 3, to further verify the pivotal role of neural oscillations in information processing, we investigated the neural network mechanisms underlying response inhibition. A long-standing debate has centered on identifying the cortical initiator of response inhibition in the beta band, with two main regions proposed: the rIFG and the pre-SMA. This third study aimed to determine which of these regions is activated first and exerts a potential information exchange on the other. Using high temporal resolution magnetoencephalography (MEG) and a relatively large cohort of subjects, the study demonstrates that the rIFG is activated significantly earlier than the pre-SMA. The onset of beta-band activity in the rIFG occurred at around 140 ms after the STOP signal. Further analyses showed that the beta-band activity in the rIFG was crucial for successful stopping, as evidenced by its predictive value for stopping performance. Connectivity analysis revealed that the rIFG sends information in the beta band to the pre-SMA but not vice versa, emphasizing the rIFG’s dominance in the response inhibition process. The results provide strong support for the hypothesis that the rIFG initiates stopping and utilizes beta-band oscillations for this purpose. These findings have significant implications, suggesting the possibility of spatially localized, oscillation-based interventions for response inhibition. Conclusion: The present work proposes a novel algorithm for uncovering the frequencies at which information is transferred between sources and targets in the brain, providing valuable insights into the computational dynamics of neural processes. The spectrally resolved transfer entropy was successfully applied to experimental neural data from intracranial recordings in ferrets and MEG recordings in humans. Furthermore, the study of active information storage (AIS) under anesthesia revealed that the spectrally resolved AIS offers unique additional insights beyond traditional spectral power analysis. By examining changes in neural information processing, the study demonstrates how AIS analysis can deepen the understanding of anesthesia’s effects on cortical information processing. Moreover, the third study’s findings provide strong evidence supporting the critical role of beta oscillations in information processing, particularly in response inhibition. The research successfully demonstrates that beta oscillations in the rIFG function as the key initiator of the response inhibition process, acting as a top-down control mechanism. The identification of beta oscillations as a crucial factor in information processing opens possibilities for further research and targeted interventions in neurological disorders. Taken together, the current work highlights the role of spectrally resolved information processing in neural systems, not only by introducing novel algorithms but also by successfully applying them to experimental oscillatory neural activity in relation to low-level cortical information processing (anesthesia) as well as high-level processes (cognitive response inhibition).
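As a companion to the transfer-entropy sketch above, the active information storage used in study 2 admits an equally compact plug-in estimate; the embedding length, binning, and AR(2) test signal below are assumptions for illustration only, not the thesis's spectrally resolved estimator.

```python
# Sketch: plug-in active information storage AIS = I(X_t ; X_{t-1..t-k}) in bits,
# for a binned scalar time series. Embedding length, binning, and the test signals
# are illustrative assumptions.
import numpy as np

def active_information_storage(x, k=2, bins=4):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    s = np.digitize(x, edges)                               # discretize the series
    future = s[k:]
    past = np.stack([s[k - 1 - i : len(s) - 1 - i] for i in range(k)], axis=1)

    def entropy(arr):
        _, counts = np.unique(arr, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    joint = np.column_stack([future, past])
    return entropy(future[:, None]) + entropy(past) - entropy(joint)

rng = np.random.default_rng(2)
n = 20000
predictable = np.zeros(n)
for t in range(2, n):                                       # strongly autocorrelated AR(2) process
    predictable[t] = 1.6 * predictable[t - 1] - 0.8 * predictable[t - 2] + rng.standard_normal()
unpredictable = rng.standard_normal(n)                      # white noise stores little information

print("AIS, AR(2) process :", round(active_information_storage(predictable), 3), "bits")
print("AIS, white noise   :", round(active_information_storage(unpredictable), 3), "bits")
```

The contrast between the two test signals mirrors the definition quoted in the abstract: a predictable process that still visits many states yields high AIS, whereas an unpredictable one yields AIS near zero.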