Neuromodulatory effects on early visual signal processing
Understanding how the brain processes information and generates simple to complex behavior constitutes one of the core objectives in systems neuroscience. However, when studying different neural circuits, their dynamics, and their interactions, researchers often assume fixed connectivity, overlooking a crucial factor: the effect of neuromodulators. Neuromodulators can modulate circuit activity depending on several aspects, such as different brain states or sensory contexts. Therefore, considering the modulatory effects of neuromodulators on the functionality of neural circuits is an indispensable step towards a more complete picture of the brain’s ability to process information. In principle, this issue affects all neural systems; hence, this thesis addresses it with a combined experimental and computational approach to resolve neuromodulatory effects at the cell-type level in a well-defined system, the mouse retina. In the first study, we established and applied a machine-learning-based classification algorithm to identify individual functional retinal ganglion cell types, which enabled detailed cell type-resolved analyses. We applied the classifier to newly acquired data of light-evoked retinal ganglion cell responses and successfully identified their functional types. Here, the cell type-resolved analysis revealed that a particular principle of efficient coding applies to all types in a similar way. In a second study, we focused on the inter-experimental variability that can arise when pooling datasets, where subtle variations between the individual datasets may complicate further downstream analyses. To tackle this, we proposed a theoretical framework based on an adversarial autoencoder with the objective of removing inter-experimental variability from the pooled dataset while preserving the underlying biological signal of interest.
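The batch-effect removal objective from the second study can be sketched as a pair of competing losses: the encoder/decoder reconstructs the responses, while a discriminator tries to read the experiment identity out of the latent code. This is a minimal numpy illustration with made-up function names and simple MSE/cross-entropy terms, not the thesis's actual adversarial autoencoder:

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Mean squared error between recorded responses and their reconstruction."""
    return float(np.mean((x - x_hat) ** 2))

def domain_confusion_loss(logits, domain_labels):
    """Cross-entropy of a domain discriminator on the latent codes.
    The encoder is trained to *increase* this term so that the latent
    no longer encodes which experiment a cell came from."""
    z = logits - logits.max(axis=1, keepdims=True)      # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = len(domain_labels)
    return float(-np.log(p[np.arange(n), domain_labels] + 1e-12).mean())

def encoder_objective(x, x_hat, disc_logits, domains, lam=1.0):
    """Combined encoder/decoder loss: reconstruct well while fooling the
    discriminator (adversarial sign flip on the domain term)."""
    return reconstruction_loss(x, x_hat) - lam * domain_confusion_loss(disc_logits, domains)
```

In a full implementation the discriminator would be trained in alternation to minimize the same domain term that the encoder maximizes.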
In the last study of this thesis, we investigated the functional effects of the neuromodulator nitric oxide on the retinal output signal. To this end, we used our previously developed retinal ganglion cell type classifier to unravel type-specific effects and established a paired recording protocol to account for type-specific, time-dependent effects. We found that certain retinal ganglion cell types showed type-specific adaptational changes and that nitric oxide distinctly modulated a particular group of retinal ganglion cells.
In summary, I first present several experimental and computational methods that make it possible to study functional neuromodulatory effects on the retinal output signal in a cell type-resolved manner and, second, use these tools to demonstrate their feasibility for studying the neuromodulator nitric oxide.
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
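As a rough illustration of the decoding pipeline these papers share, EMG is typically segmented into overlapping analysis windows before being fed to a network, and sigmoid outputs are thresholded into simultaneous movement decisions. The sketch below assumes this generic setup, not any specific paper's model:

```python
import numpy as np

def sliding_windows(emg, win_len, step):
    """Segment a (samples x channels) EMG recording into overlapping
    analysis windows, the standard input format for myoelectric decoders."""
    n = 1 + (len(emg) - win_len) // step
    return np.stack([emg[i * step : i * step + win_len] for i in range(n)])

def multilabel_decode(probs, threshold=0.5):
    """Turn per-movement probabilities (e.g. a CNN's sigmoid outputs) into
    simultaneous movement decisions: several degrees of freedom may be
    active at once, unlike in single-label classification."""
    return (probs >= threshold).astype(int)
```

For example, a 100-sample, 8-channel recording windowed with length 40 and step 20 yields four windows of shape (40, 8).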
Detecting Team Conflict From Multiparty Dialogue
The emergence of online collaboration platforms has dramatically changed the dynamics of human teamwork, creating a veritable army of virtual teams composed of workers in different physical locations. The global world requires a tremendous amount of collaborative problem solving, primarily virtual, making it an excellent domain for computer scientists and team cognition researchers who seek to understand the dynamics involved in collaborative tasks and to provide solutions that can support effective collaboration. Mining and analyzing data from collaborative dialogues can yield insights into virtual teams' thought processes and help develop virtual agents to support collaboration. Good communication is indubitably the foundation of effective collaboration. Over time, teams develop their own communication styles and often exhibit entrainment, a conversational phenomenon in which humans synchronize their linguistic choices. This dissertation presents several technical innovations in the usage of machine learning towards analyzing, monitoring, and predicting collaboration success from multiparty dialogue by successfully handling the problems of resource scarcity and natural distribution shifts. First, we examine the problem of predicting team performance from embeddings learned from multiparty dialogues such that teams with similar conflict scores lie close to one another in vector space. We extract the embeddings from three types of features: 1) dialogue acts, 2) sentiment polarity, and 3) syntactic entrainment. Although all of these features can be used to predict team performance effectively, their utility varies by the teamwork phase. We separate the dialogues of players playing a cooperative game into stages: 1) early (knowledge building), 2) middle (problem-solving), and 3) late (culmination). Unlike syntactic entrainment, both dialogue act and sentiment embeddings effectively classify team performance, even during the initial phase.
Second, we address the problem of learning generalizable models of collaboration. Machine learning models often suffer from domain shifts; one advantage of encoding semantic features is their adaptability across multiple domains. We evaluate the generalizability of the different embeddings to other goal-oriented teamwork dialogues. Finally, in addition to identifying the features predictive of successful collaboration, we propose a multi-feature embedding (MFeEmb) to improve the generalizability of collaborative task success prediction models under natural distribution shifts and resource scarcity. MFeEmb leverages the strengths of semantic, structural, and textual features of the dialogues by incorporating the most meaningful information from dialogue acts (DAs), sentiment polarities, and the vocabulary of the dialogues. To further enhance the performance of MFeEmb in resource-scarce scenarios, we employ synthetic data generation and few-shot learning. We use the few-shot learning method proposed by Bailey and Chopra (2018) from the FsText Python library, replacing its universal embedding with our proposed multi-feature embedding to compare the performance of the two. For data augmentation, we propose synonym replacement from the collaborative dialogue vocabulary instead of synonym replacement from WordNet. The research was conducted on several multiparty dialogue datasets, including ASIST, SwDA, Hate Speech, Diplomacy, Military, SAMSum, AMI, and GitHub. Results show that the proposed multi-feature embedding is an excellent choice for the meta-training stage of few-shot learning, even when it learns from a training set as small as 62 samples. Our proposed data augmentation method also yielded a significant performance improvement.
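The in-vocabulary synonym replacement idea can be sketched roughly as below; the function, its parameters, and the tiny synonym map are illustrative assumptions, not taken from the thesis:

```python
import random

def synonym_augment(tokens, vocab_synonyms, n_replace=1, seed=0):
    """Replace up to n_replace tokens with synonyms drawn from an
    in-domain (collaborative-dialogue) synonym map rather than WordNet.
    vocab_synonyms maps a token to a list of in-vocabulary alternatives."""
    rng = random.Random(seed)
    out = list(tokens)
    # only tokens that have an in-domain synonym are candidates
    candidates = [i for i, t in enumerate(out) if t in vocab_synonyms]
    for i in rng.sample(candidates, min(n_replace, len(candidates))):
        out[i] = rng.choice(vocab_synonyms[out[i]])
    return out
```

Each augmented utterance keeps its length and differs from the original in at most `n_replace` positions.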
Our research has potential ramifications for the development of conversational agents that facilitate teaming, as well as for the creation of more effective social coding platforms to better support teamwork between software engineers.
Modular lifelong machine learning
Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge.
Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to only reuse the subset of modules which are useful for the task at hand.
This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.
Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.
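To make the underlying search problem concrete, a brute-force baseline over per-layer module choices looks like the sketch below. PICLE's contribution is precisely to avoid this exhaustive enumeration with probabilistic models, so this sketch only frames the problem, it is not the thesis's method:

```python
from itertools import product

def best_combination(library, score_fn):
    """Exhaustively score every per-layer choice of module.
    library: one list of candidate modules per layer position.
    score_fn: maps a full combination (tuple of modules) to a score,
    e.g. validation accuracy after fine-tuning the new modules."""
    best, best_score = None, float("-inf")
    for combo in product(*library):
        s = score_fn(combo)
        if s > best_score:
            best, best_score = combo, s
    return best, best_score
```

The number of combinations grows as the product of per-layer library sizes, which is exactly why a probabilistic search over this space is needed for long problem sequences.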
Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improved anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods.
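A standard multi-fidelity scheme of the kind such methods build on is successive halving: evaluate all candidates cheaply, keep the best fraction, and re-evaluate the survivors at a higher budget. This generic sketch is not the thesis's specific HPO method:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Successive halving over hyperparameter configurations.
    evaluate(config, budget) returns a score (higher is better) obtained
    at the given fidelity, e.g. training epochs; eta controls how
    aggressively candidates are culled as the budget grows."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        if len(survivors) <= 1:
            break
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[:max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]
```

A model-based variant replaces the uniform initial sample of configurations with candidates proposed by a surrogate model, which is the kind of marriage the thesis pursues.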
Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
Learning Disentangled Representation with Mutual Information Maximization for Real-Time UAV Tracking
Efficiency has been a critical problem in UAV tracking due to limitations in
computation resources, battery capacity, and unmanned aerial vehicle maximum
load. Although discriminative correlation filters (DCF)-based trackers prevail
in this field for their favorable efficiency, some recently proposed
lightweight deep learning (DL)-based trackers using model compression
demonstrated quite remarkable CPU efficiency as well as precision.
Unfortunately, the model compression methods utilized by these works, though
simple, are still unable to achieve satisfactory tracking precision at higher
compression rates. This paper aims to exploit disentangled representation
learning with mutual information maximization (DR-MIM) to further improve
DL-based trackers' precision and efficiency for UAV tracking. The proposed
disentangled representation separates the feature into identity-related and
identity-unrelated features. Only the latter is used, which enhances the
effectiveness of the feature representation for subsequent classification and
regression tasks. Extensive experiments on four UAV benchmarks, including
UAV123@10fps, DTB70, UAVDT and VisDrone2018, show that our DR-MIM tracker
significantly outperforms state-of-the-art UAV tracking methods.
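Mutual information maximization between representations is commonly implemented with an InfoNCE-style contrastive bound, in which matched pairs of views should score higher than mismatched ones. The numpy sketch below illustrates that generic idea and is not DR-MIM's exact loss:

```python
import numpy as np

def info_nce(z, z_pos, temperature=0.1):
    """InfoNCE lower bound on the mutual information between two sets of
    embeddings of the same targets: row i of z and row i of z_pos are a
    matched pair; every other pairing acts as a negative."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_pos = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = z @ z_pos.T / temperature            # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_p).mean())          # matched pairs on the diagonal
```

Minimizing this loss pushes the mutual information between the two representations up, which is the "MIM" ingredient of the approach.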
A 3D explainability framework to uncover learning patterns and crucial sub-regions in variable sulci recognition
Precisely identifying sulcal features in brain MRI is made challenging by the
variability of brain folding. This research introduces an innovative 3D
explainability framework that validates outputs from deep learning networks in
their ability to detect the paracingulate sulcus, an anatomical feature that
may or may not be present on the frontal medial surface of the human brain.
This study trained and tested two networks, combining the local explainability
techniques Grad-CAM and SHAP with a dimensionality reduction method. The
explainability framework provided both localized and global explanations, along
with accuracy of classification results, revealing pertinent sub-regions
contributing to the decision process through a post-fusion transformation of
explanatory and statistical features. Leveraging the TOP-OSLO dataset of MRI
acquired from patients with schizophrenia, greater accuracies of paracingulate
sulcus detection (presence or absence) were found in the left compared to right
hemispheres with distinct, but extensive sub-regions contributing to each
classification outcome. The study also inadvertently highlighted the critical
role of an unbiased annotation protocol in maintaining network performance
fairness. Our proposed method not only offers automated, impartial annotations
of a variable sulcus but also provides insights into the broader anatomical
variations associated with its presence throughout the brain. The adoption of
this methodology holds promise for instigating further explorations and
inquiries in the field of neuroscience.
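Grad-CAM, one of the two local explainability techniques used, reduces to a weighted sum of channel activations followed by a ReLU; for a 3D volume the computation can be sketched as follows (shapes are illustrative assumptions):

```python
import numpy as np

def grad_cam_3d(activations, gradients):
    """Grad-CAM heatmap for a 3D volume. Both inputs have shape
    (channels, x, y, z): the activations of a convolutional layer and
    the gradients of the class score with respect to them."""
    weights = gradients.mean(axis=(1, 2, 3))          # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0)                         # keep positive evidence only
```

The resulting (x, y, z) map highlights the sub-regions that contributed positively to the network's decision, which is the kind of localized explanation the framework fuses with SHAP values.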
Resource efficient action recognition in videos
This thesis traces an innovative journey in the domain of real-world action recognition, in particular focusing on memory- and data-efficient systems. It begins by introducing a novel approach for smart frame selection, which significantly reduces computational costs in video classification. It further optimizes the action recognition process by addressing the challenges of training time and memory consumption in video transformers, laying a strong foundation for memory-efficient action recognition.
The thesis then delves into zero-shot learning, focusing on the flaws of the currently existing protocol and establishing a new split for true zero-shot action recognition, ensuring zero overlap between unseen test classes and training or pre-training classes. Building on this, a unique cluster-based representation, optimized using reinforcement learning, is proposed for zero-shot action recognition. Crucially, we show that joint visual-semantic representation learning is essential for improved performance. We also experiment with feature generation approaches for zero-shot action recognition by introducing a synthetic sample selection methodology, extending the utility of zero-shot learning to both images and videos and selecting high-quality samples for synthetic data augmentation. This form of data valuation is then incorporated into our novel video data augmentation approach, where we generate video composites by mixing the foregrounds and backgrounds of different videos. The data valuation helps us choose good composites at a reduced overall cost. Finally, we propose the creation of a meaningful semantic space for action labels. We create a textual description dataset for each action class and propose a novel feature-generating approach to maximise the benefits of this semantic space. The research contributes significantly to the field, potentially paving the way for more efficient, resource-friendly, and robust video processing and understanding techniques.
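The foreground/background mixing used to build video composites can be sketched as a per-frame masked blend; the shapes and the binary actor mask are illustrative assumptions, not the thesis's exact pipeline:

```python
import numpy as np

def composite_video(fg, bg, mask):
    """Paste the masked foreground of one clip onto the background of
    another, frame by frame. fg, bg: (frames, H, W, 3) videos;
    mask: (frames, H, W) with 1 where the foreground actor is."""
    m = mask[..., None].astype(fg.dtype)   # broadcast mask over color channels
    return fg * m + bg * (1 - m)
```

Data valuation then scores such composites so that only the most useful ones are kept for augmentation, keeping the overall cost down.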
Conditional Invertible Generative Models for Supervised Problems
Invertible neural networks (INNs), in the setting of normalizing flows, are a type of unconditional generative likelihood model. Despite various attractive properties compared to other common generative model types, they are rarely useful for supervised tasks or real applications due to their unguided outputs. In this work, we therefore present three new methods that extend the standard INN setting, falling under a broader category we term generative invertible models. These new methods allow leveraging the theoretical and practical benefits of INNs to solve supervised problems in new ways, including real-world applications from different branches of science. The key finding is that our approaches enhance many aspects of trustworthiness in comparison to conventional feed-forward networks, such as uncertainty estimation and quantification, explainability, and proper handling of outlier data.
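The invertibility that INNs rely on is usually obtained from affine coupling layers, which can be inverted in closed form: half the dimensions pass through unchanged and parameterize an affine map of the other half. A generic numpy sketch (not the thesis's specific architectures):

```python
import numpy as np

def coupling_forward(x, scale_fn, shift_fn):
    """Affine coupling layer, the standard invertible building block in
    normalizing-flow INNs. scale_fn/shift_fn may be arbitrary networks;
    invertibility never requires inverting them."""
    x1, x2 = np.split(x, 2, axis=1)
    y2 = x2 * np.exp(scale_fn(x1)) + shift_fn(x1)
    return np.concatenate([x1, y2], axis=1)

def coupling_inverse(y, scale_fn, shift_fn):
    """Exact inverse: y1 passed through unchanged, so the same
    scale/shift values can be recomputed and undone."""
    y1, y2 = np.split(y, 2, axis=1)
    x2 = (y2 - shift_fn(y1)) * np.exp(-scale_fn(y1))
    return np.concatenate([y1, x2], axis=1)
```

Because the log-determinant of the Jacobian is just the sum of the scale outputs, stacking such layers gives a tractable likelihood, which is what makes the flow setting attractive in the first place.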