Computational techniques to interpret the neural code underlying complex cognitive processes
Advances in large-scale neural recording technology have significantly improved the
capacity to further elucidate the neural code underlying complex cognitive processes.
This thesis aimed to investigate two research questions in rodent models. First, what
is the role of the hippocampus in memory and, specifically, what is the underlying
neural code that contributes to spatial memory and navigational decision-making?
Second, how is social cognition represented in the medial prefrontal cortex at the
level of individual neurons? The thesis begins by investigating memory and
social cognition in the context of healthy and diseased states, using non-invasive
methods (i.e. fMRI and animal behavioural studies). The main body of the thesis
then shifts to developing our fundamental understanding of the neural mechanisms
underpinning these cognitive processes by applying computational techniques to analyse stable large-scale neural recordings. To achieve this, tailored calcium imaging
and behaviour preprocessing computational pipelines were developed and optimised
for use in social interaction and spatial navigation experimental analysis. In parallel,
a review was conducted on methods for multivariate/neural population analysis. A
comparison of multiple neural manifold learning (NML) algorithms identified that nonlinear algorithms such as UMAP are more adaptable across datasets of varying noise
and behavioural complexity. Furthermore, the review illustrates how NML can be
applied to disease states in the brain and introduces the secondary analyses that
can be used to enhance or characterise a neural manifold. Lastly, the preprocessing
and analytical pipelines were combined to investigate the neural mechanisms involved in social cognition and spatial memory. The social cognition study explored
how neural firing in the medial prefrontal cortex changed as a function of the social
dominance paradigm, the "Tube Test". The univariate analysis identified an ensemble
of behaviourally tuned neurons that fire preferentially during specific behaviours, such
as "pushing" or "retreating", for the animal's own behaviour and/or the competitor's
behaviour. Furthermore, in dominant animals, the neural population exhibited greater
average firing than that of subordinate animals. Next, to investigate spatial memory,
a spatial recency task was used, where rats learnt to navigate towards one of three
reward locations and then recall the rewarded location of the session. During the
task, over 1000 neurons were recorded from the hippocampal CA1 region for five rats
over multiple sessions. Multivariate analysis revealed that the sequence of neurons encoding an animalâs spatial position leading up to a rewarded location was also active
in the decision period before the animal navigates to the rewarded location. The result
posits that prospective replay of neural sequences in the hippocampal CA1 region
could provide a mechanism by which decision-making is supported.
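As a concrete illustration of the manifold-learning step described above, the sketch below embeds synthetic population activity with PCA, the linear baseline against which nonlinear NML methods such as UMAP are typically compared. The data, dimensions, and parameters here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def pca_embed(X, n_components=2):
    """Project activity onto the top principal components: the linear
    baseline that nonlinear NML methods (e.g. UMAP, Isomap) are compared
    against. Rows are time points, columns are neurons."""
    Xc = X - X.mean(axis=0)                        # centre each neuron
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores on top components

# Synthetic "population activity": a 1-D latent variable (e.g. position on
# a track) embedded in a 30-neuron firing-rate space, plus recording noise.
rng = np.random.default_rng(0)
latent = np.linspace(0.0, 2.0 * np.pi, 400)
loadings = rng.normal(size=(2, 30))
rates = np.column_stack([np.sin(latent), np.cos(latent)]) @ loadings
rates += 0.05 * rng.normal(size=rates.shape)

emb = pca_embed(rates)
print(emb.shape)  # (400, 2)
```

On data like this, a nonlinear method would be swapped in for `pca_embed` with the same (time points × neurons) input; the review's point is that such methods degrade more gracefully as noise and behavioural complexity grow.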
LIPIcs, Volume 251, ITCS 2023, Complete Volume
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
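The classical pattern-recognition pipelines that deep models are benchmarked against typically start from windowed time-domain features of the EMG signal. The sketch below computes two standard such features, root-mean-square (RMS) and mean absolute value (MAV), over sliding windows; the sampling rate, window sizes, and simulated signal are illustrative assumptions, not the dissertation's actual configuration.

```python
import numpy as np

def emg_features(emg, fs=2000, win_ms=200, step_ms=50):
    """Slide a window over multichannel EMG and compute two classical
    time-domain features per channel: RMS and MAV. Returns an array of
    shape (n_windows, 2 * n_channels), one feature row per window."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        seg = emg[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2, axis=0))   # signal power proxy
        mav = np.mean(np.abs(seg), axis=0)         # amplitude proxy
        feats.append(np.concatenate([rms, mav]))
    return np.array(feats)

rng = np.random.default_rng(1)
emg = rng.normal(size=(4000, 8))   # 2 s of simulated 8-channel EMG at 2 kHz
X = emg_features(emg)
print(X.shape)  # (37, 16)
```

Each row of `X` would then be fed to a classifier or regressor; deep models such as the CNNs and Transformers in the papers instead learn their features directly from the raw windows.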
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.Comment: Invited submission by Journal of Advanced Quantum Technologies;
accepted version 5/06/202
Coordinate-Descent Augmented Lagrangian Methods for Interpretative and Adaptive Model Predictive Control
Model predictive control (MPC) of nonlinear systems suffers a trade-off between model accuracy and real-time computational burden. This thesis presents an interpretative and adaptive MPC (IA-MPC) framework for nonlinear systems, which is related to the widely used approximation method based on successive-linearization MPC and Extended Kalman Filtering (SL-MPC-EKF). First, we introduce a solution algorithm for linear MPC that is based on the combination of Coordinate Descent and Augmented Lagrangian (CDAL) ideas. The CDAL algorithm enjoys three features: (i) it is construction-free, in that it avoids explicitly constructing the quadratic programming (QP) problem associated with MPC; (ii) it is matrix-free, as it avoids multiplications and factorizations of matrices; and (iii) it is library-free, as it can be coded without any library dependency (90 lines of C code in our implementation). We specialize the algorithm for both state-space (SS) formulations of MPC and formulations based on AutoRegressive with eXogenous terms (ARX) models (CDAL-ARX). The thesis also presents a rapid-prototyping MPC tool based on the gPROMS platform, into which the qpOASES and CDAL algorithms were integrated. In addition, based on an equivalence between SS-based and ARX-based MPC problems, we investigate the relation between the proposed IA-MPC and the classical SL-MPC-EKF method. Finally, we test and show the effectiveness of the proposed IA-MPC framework on four typical nonlinear MPC benchmark examples.
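As a rough illustration of the CDAL idea (not the thesis's 90-line C implementation), the sketch below solves a tiny equality-constrained QP with an augmented-Lagrangian outer loop and coordinate descent on the inner subproblem. The fixed penalty, iteration counts, and problem data are illustrative assumptions; the actual algorithm is specialized to MPC structure and, unlike this sketch, avoids forming matrices such as A'A.

```python
import numpy as np

def cdal_qp(Q, c, A, b, rho=10.0, outer=100, inner=50):
    """Minimise 0.5*x'Qx + c'x subject to Ax = b.

    Outer loop: augmented-Lagrangian dual update with fixed penalty rho.
    Inner loop: cyclic coordinate descent, exactly minimising the augmented
    Lagrangian one coordinate at a time. (Forming A'A here is a
    simplification; the real CDAL stays matrix-free.)"""
    n = Q.shape[0]
    x = np.zeros(n)
    lam = np.zeros(A.shape[0])
    AtA = A.T @ A
    for _ in range(outer):
        for _ in range(inner):
            for i in range(n):
                # Gradient and curvature of the augmented Lagrangian in x_i.
                g = Q[i] @ x + c[i] + A[:, i] @ lam + rho * (AtA[i] @ x - A[:, i] @ b)
                h = Q[i, i] + rho * AtA[i, i]
                x[i] -= g / h                # exact 1-D Newton step
        lam += rho * (A @ x - b)             # dual ascent on the residual
    return x

# Toy problem: min x1^2 + x2^2 - 2*x1 - 4*x2  s.t.  x1 + x2 = 1,
# whose optimum is x = (0, 1) (found by eliminating the constraint).
x = cdal_qp(np.diag([2.0, 2.0]), np.array([-2.0, -4.0]),
            np.array([[1.0, 1.0]]), np.array([1.0]))
```

The coordinate updates involve only scalar divisions and dot products, which is what makes a dependency-free C implementation of this scheme compact.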
Exploring Hyperspectral Imaging and 3D Convolutional Neural Network for Stress Classification in Plants
Hyperspectral imaging (HSI) has emerged as a transformative technology in imaging, characterized by its ability to capture a wide spectrum of light, including wavelengths beyond the visible range. This approach significantly differs from traditional imaging methods such as RGB imaging, which uses three color channels, and multispectral imaging, which captures several discrete spectral bands. Through this approach, HSI offers detailed spectral signatures for each pixel, facilitating a more nuanced analysis of the imaged subjects. This capability is particularly beneficial in applications like agricultural practices, where it can detect changes in physiological and structural characteristics of crops. Moreover, the ability of HSI to monitor these changes over time is advantageous for observing how subjects respond to different environmental conditions or treatments. However, the high-dimensional nature of hyperspectral data presents challenges in data processing and feature extraction. Traditional machine learning algorithms often struggle to handle such complexity. This is where 3D Convolutional Neural Networks (CNNs) become valuable. Unlike 1D-CNNs, which extract features from spectral dimensions, and 2D-CNNs, which focus on spatial dimensions, 3D CNNs have the capability to process data across both spectral and spatial dimensions. This makes them adept at extracting complex features from hyperspectral data. In this thesis, we explored the potency of HSI combined with 3D-CNN in agriculture domain where plant health and vitality are paramount. To evaluate this, we subjected lettuce plants to varying stress levels to assess the performance of this method in classifying the stressed lettuce at the early stages of growth into their respective stress-level groups. For this study, we created a dataset comprising 88 hyperspectral image samples of stressed lettuce. Utilizing Bayesian optimization, we developed 350 distinct 3D-CNN models to assess the method. 
The top-performing model achieved a 75.00% test accuracy. Additionally, we addressed the challenge of generating valid 3D-CNN models in the Keras Tuner library through meticulous hyperparameter configuration. Our investigation also extends to the role of individual channels and channel groups within the color and near-infrared spectrum in predicting results for each stress-level group. We observed that the red and green spectra have a higher influence on the prediction results. Furthermore, we conducted a comprehensive review of 3D-CNN-based classification techniques for diseased and defective crops using non-UAV-based hyperspectral images.
MITACS; Master of Science in Applied Computer Science
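To make the spectral-spatial distinction concrete, the sketch below implements a "valid" 3-D convolution (in the CNN sense, i.e. cross-correlation without kernel flipping) of a toy hyperspectral cube with a single kernel, showing how one 3-D kernel mixes bands and pixels jointly. The shapes and data are illustrative assumptions, unrelated to the lettuce dataset or the tuned models.

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """'Valid' 3-D convolution (no padding, stride 1) of a hyperspectral
    cube of shape (bands, height, width) with a kernel (kb, kh, kw).
    A 3-D kernel spans the spectral and both spatial dimensions at once,
    which is what lets 3D-CNNs learn joint spectral-spatial features."""
    B, H, W = cube.shape
    kb, kh, kw = kernel.shape
    out = np.zeros((B - kb + 1, H - kh + 1, W - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Weighted sum over a (kb, kh, kw) spectral-spatial window.
                out[b, i, j] = np.sum(cube[b:b+kb, i:i+kh, j:j+kw] * kernel)
    return out

cube = np.arange(5 * 6 * 6, dtype=float).reshape(5, 6, 6)  # toy 5-band image
kernel = np.ones((3, 3, 3)) / 27.0                          # averaging kernel
out = conv3d_valid(cube, kernel)
print(out.shape)  # (3, 4, 4)
```

A 2-D convolution applied per band would reuse the same spatial window in every band but could never combine neighbouring bands inside one kernel; the extra kernel axis above is the whole difference.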
Deep Statistical Models with Application to Environmental Data
When analyzing environmental data, constructing a realistic statistical model is important, not only to fully characterize the physical phenomena, but also to provide valid and useful predictions. Gaussian process models are amongst the most popular tools used for this purpose. However, many assumptions are usually made when using Gaussian processes, such as stationarity of the covariance function. There are several approaches to construct nonstationary spatial and spatio-temporal Gaussian processes, including the deformation approach. In the deformation approach, the geographical domain is warped into a new domain, on which the Gaussian process is modeled to be stationary. One of the main challenges with this approach is how to construct a deformation function that is complicated enough to adequately capture the nonstationarity in the process, but simple enough to facilitate statistical inference and prediction. In this thesis, by using ideas from deep learning, we construct deformation functions that are compositions of simple warping units. In particular, deformation functions that are composed of aligning functions and warping functions are introduced to model nonstationary and asymmetric multivariate spatial processes, while spatial and temporal warping functions are used to model nonstationary spatio-temporal processes. Similarly to the traditional deformation approach, familiar stationary models are used on the warped domain. It is shown that this new approach to model nonstationarity is computationally efficient, and that it can lead to predictions that are superior to those from stationary models. We show the utility of these models on both simulated data and real-world environmental data: ocean temperatures and surface-ice elevation. The developed warped nonstationary processes can also be used for emulation. 
We show that a warped, gradient-enhanced Gaussian process surrogate model can be embedded in algorithms such as importance sampling and delayed-acceptance Markov chain Monte Carlo. Our surrogate models can provide more accurate emulation than other traditional surrogate models, and can help speed up Bayesian inference in problems with exponential-family likelihoods with intractable normalizing constants, for example when analyzing satellite images using the Potts model.
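A minimal sketch of the deformation approach described above: a stationary squared-exponential kernel is evaluated on warped coordinates, yielding a nonstationary covariance on the original domain. The warping function, length-scale, and data here are illustrative assumptions, far simpler than the deep compositional warpings developed in the thesis.

```python
import numpy as np

def warp(s):
    """Hypothetical deformation built from one elementary unit, a radial
    stretch of the plane; a stand-in for compositions of warping units."""
    r = np.linalg.norm(s, axis=-1, keepdims=True)
    return s * (1.0 + 0.5 * np.tanh(r))

def nonstationary_cov(S1, S2, ell=0.5, sigma2=1.0):
    """Deformation-approach covariance: a stationary squared-exponential
    kernel evaluated on the warped domain, k(s, s') = C0(f(s) - f(s'))."""
    W1, W2 = warp(S1), warp(S2)
    d2 = ((W1[:, None, :] - W2[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / ell ** 2)

# Covariance matrix over 50 random sites in [-1, 1]^2: nonstationary in the
# original coordinates, yet still a valid (symmetric, positive
# semi-definite) kernel because stationarity holds on the warped domain.
S = np.random.default_rng(2).uniform(-1.0, 1.0, size=(50, 2))
K = nonstationary_cov(S, S)
```

Inference then proceeds exactly as for a stationary Gaussian process, with `K` in place of the stationary covariance matrix; only the warping parameters are extra.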
LIPIcs, Volume 261, ICALP 2023, Complete Volume
On-premise containerized, light-weight software solutions for Biomedicine
Bioinformatics software systems are critical tools for analysing large-scale biological
data, but their design and implementation can be challenging due to the need for reliability, scalability, and performance. This thesis investigates the impact of several
software approaches on the design and implementation of bioinformatics software
systems. These approaches include software patterns, microservices, distributed
computing, containerisation and container orchestration. The research focuses on
understanding how these techniques affect bioinformatics software systems' reliability, scalability, performance, and efficiency. Furthermore, this research highlights
the challenges and considerations involved in their implementation. This study also
examines potential solutions for implementing container orchestration in bioinformatics research teams with limited resources and the challenges of using container
orchestration. Additionally, the thesis considers microservices and distributed computing and how these can be optimised in the design and implementation process to
enhance the productivity and performance of bioinformatics software systems. The
research was conducted using a combination of software development, experimentation, and evaluation. The results show that implementing software patterns can
significantly improve the code accessibility and structure of bioinformatics software
systems. Specifically, microservices and containerisation also enhanced system reliability, scalability, and performance. Additionally, the study indicates that adopting
advanced software engineering practices, such as model-driven design and container
orchestration, can facilitate efficient and productive deployment and management of
bioinformatics software systems, even for researchers with limited resources. Overall, we develop a software system integrating all our findings. Our proposed system
demonstrated the ability to address challenges in bioinformatics. The thesis makes
several key contributions in addressing the research questions surrounding the design,
implementation, and optimisation of bioinformatics software systems using software
patterns, microservices, containerisation, and advanced software engineering principles and practices. Our findings suggest that incorporating these technologies can
significantly improve bioinformatics software systems' reliability, scalability, performance, efficiency, and productivity.