Classification of Cognitive Load and Expertise for Adaptive Simulation using Deep Multitask Learning
Simulations are a pedagogical means of enabling a risk-free way for
healthcare practitioners to learn, maintain, or enhance their knowledge and
skills. Such simulations should provide an optimum amount of cognitive load to
the learner and be tailored to their levels of expertise. However, most current
simulations are a one-type-fits-all tool used to train different learners
regardless of their existing skills, expertise, and ability to handle cognitive
load. To address this problem, we propose an end-to-end framework for a trauma
simulation that actively classifies a participant's level of cognitive load and
expertise for the development of a dynamically adaptive simulation. To
facilitate this solution, trauma simulations were developed for the collection
of electrocardiogram (ECG) signals of both novice and expert practitioners. A
multitask deep neural network was developed to utilize this data and classify
high and low cognitive load, as well as expert and novice participants. A
leave-one-subject-out (LOSO) validation was used to evaluate the effectiveness
of our model, achieving an accuracy of 89.4% and 96.6% for classification of
cognitive load and expertise, respectively.
Comment: 2019 IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or future
media, including reprinting/republishing this material for advertising or
promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works.
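The multitask setup described in this abstract, a shared representation feeding two classification heads (cognitive load and expertise), can be sketched as a toy forward pass. Everything below is illustrative: the layer sizes, the random weights, and the flattened-ECG input are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a flattened 10-second ECG window (2560 samples)
# mapped through one shared hidden layer into two task heads.
n_samples, n_features, n_hidden = 8, 2560, 32

W_shared = rng.normal(scale=0.01, size=(n_features, n_hidden))
W_load = rng.normal(scale=0.01, size=(n_hidden, 2))    # high vs. low load
W_expert = rng.normal(scale=0.01, size=(n_hidden, 2))  # expert vs. novice

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)  # shared ReLU representation
    return softmax(h @ W_load), softmax(h @ W_expert)

x = rng.normal(size=(n_samples, n_features))
p_load, p_expert = forward(x)
print(p_load.shape, p_expert.shape)  # (8, 2) (8, 2)
```

In training, each head would contribute its own loss term while gradients from both tasks update the shared trunk, which is what lets the two classification problems regularize each other.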
Self-supervised Learning for ECG-based Emotion Recognition
We present an electrocardiogram (ECG)-based emotion recognition system using
self-supervised learning. Our proposed architecture consists of two main
networks, a signal transformation recognition network and an emotion
recognition network. First, unlabelled data are used to successfully train the
former network to detect specific pre-determined signal transformations in the
self-supervised learning step. Next, the weights of the convolutional layers of
this network are transferred to the emotion recognition network, and two dense
layers are trained in order to classify arousal and valence scores. We show
that our self-supervised approach helps the model learn the ECG feature
manifold required for emotion recognition, performing equal or better than the
fully-supervised version of the model. Our proposed method outperforms the
state-of-the-art in ECG-based emotion recognition with two publicly available
datasets, SWELL and AMIGOS. Further analysis highlights the advantage of our
self-supervised approach in requiring significantly less data to achieve
acceptable results.
Comment: Accepted, 45th IEEE International Conference on Acoustics, Speech,
and Signal Processing (ICASSP)
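The pretext task described above, training a network to recognize which transformation was applied to an unlabelled signal, can be sketched as a data-generation step. The six transformations below only approximate the kind used in self-supervised ECG pretraining; the exact set and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each transformation index doubles as a free (self-supervised) label.
TRANSFORMS = [
    lambda s: s,                                         # identity
    lambda s: s + rng.normal(scale=0.05, size=s.shape),  # additive noise
    lambda s: 1.5 * s,                                   # amplitude scaling
    lambda s: -s,                                        # negation
    lambda s: s[::-1],                                   # time reversal
    lambda s: np.roll(s, len(s) // 4),                   # circular shift
]

def make_pretext_batch(signals):
    """Apply a random transformation to each signal; the transformation
    index becomes the label the recognition network must predict."""
    xs, ys = [], []
    for s in signals:
        k = int(rng.integers(len(TRANSFORMS)))
        xs.append(TRANSFORMS[k](s))
        ys.append(k)
    return np.stack(xs), np.array(ys)

signals = [np.sin(np.linspace(0, 8 * np.pi, 256)) for _ in range(4)]
x, y = make_pretext_batch(signals)
print(x.shape, y.shape)  # (4, 256) (4,)
```

A convolutional network trained on these (signal, transformation-index) pairs needs no human annotation, which is why the approach scales to large unlabelled ECG corpora before the emotion head is fine-tuned.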
Learn Piano with BACh: An Adaptive Learning Interface that Adjusts Task Difficulty based on Brain State
We present Brain Automated Chorales (BACh), an adaptive brain-computer system that dynamically increases the level of difficulty in a musical learning task based on pianists' cognitive workload, measured by functional near-infrared spectroscopy. As users' cognitive workload fell below a certain threshold, suggesting that they had mastered the material and could handle more cognitive information, BACh automatically increased the difficulty of the learning task. We found that learners played with significantly increased accuracy and speed in the brain-based adaptive task compared to our control condition. Participant feedback indicated that they felt they learned better with BACh and liked the timing of the level changes. The underlying premise of BACh can be applied to learning situations where a task can be broken down into increasing levels of difficulty.
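The threshold-based adaptation loop at the core of BACh can be sketched in a few lines. The threshold value, smoothing window, and number of levels below are illustrative choices, not the study's parameters.

```python
from collections import deque

class AdaptiveTask:
    """Raise task difficulty one level whenever a windowed mean of the
    workload signal stays below a mastery threshold."""

    def __init__(self, threshold=0.4, window=5, max_level=4):
        self.threshold = threshold
        self.max_level = max_level
        self.level = 1
        self.recent = deque(maxlen=window)

    def update(self, workload):
        """Feed one workload reading in [0, 1]; return the current level."""
        self.recent.append(workload)
        full = len(self.recent) == self.recent.maxlen
        if full and sum(self.recent) / len(self.recent) < self.threshold:
            if self.level < self.max_level:
                self.level += 1
                self.recent.clear()  # re-measure workload at the new level
        return self.level

task = AdaptiveTask()
for w in [0.8, 0.7, 0.5, 0.3, 0.3, 0.3, 0.2, 0.3]:
    level = task.update(w)
print(level)  # workload settled below threshold once, so level 2
```

Smoothing over a window and clearing it after each promotion are the two details that keep a noisy physiological signal from triggering spurious difficulty jumps.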
Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies
NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment, and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities.
This research effort is intended to help the FAA evaluate pilot modeling efforts and select appropriate tools for future efforts to predict pilot performance in NextGen operations.
Multimodal Brain-Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines
Through this paper, we introduce a novel driver cognitive load assessment
dataset, CL-Drive, which contains Electroencephalogram (EEG) signals along with
other physiological signals such as Electrocardiography (ECG) and Electrodermal
Activity (EDA) as well as eye tracking data. The data was collected from 21
subjects while driving in an immersive vehicle simulator, in various driving
conditions, to induce different levels of cognitive load in the subjects. The
tasks consisted of 9 complexity levels for 3 minutes each. Each driver reported
their subjective cognitive load every 10 seconds throughout the experiment. The
dataset contains the subjective cognitive load recorded as ground truth. In
this paper, we also provide benchmark classification results for different
machine learning and deep learning models for both binary and ternary label
distributions. We followed two evaluation criteria, namely 10-fold
cross-validation and leave-one-subject-out (LOSO). We have trained our models
on both hand-crafted features and raw data.
Comment: 13 pages, 8 figures, 11 tables. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice.
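The LOSO evaluation protocol used for the CL-Drive baselines can be sketched as a splitting routine: every subject serves exactly once as the held-out test set, so no subject's samples ever appear on both sides of a split. The subject IDs below are toy data.

```python
import numpy as np

def loso_splits(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) triples,
    one per unique subject."""
    ids = np.asarray(subject_ids)
    for held_out in sorted(set(subject_ids)):
        train = np.flatnonzero(ids != held_out)
        test = np.flatnonzero(ids == held_out)
        yield held_out, train, test

subject_ids = [1, 1, 2, 2, 3, 3, 3]
folds = list(loso_splits(subject_ids))
print(len(folds))  # one fold per subject -> 3
for held_out, train, test in folds:
    # no subject leaks between train and test partitions
    assert held_out not in set(np.asarray(subject_ids)[train])
```

Subject-wise splitting is the stricter of the two criteria in the abstract: 10-fold splits can place windows from the same driver in both train and test, which tends to inflate accuracy on physiological signals.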
Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 314)
This bibliography lists 139 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1988.
Enhancing Neuromorphic Computing with Advanced Spiking Neural Network Architectures
This dissertation proposes ways to address current limitations of neuromorphic computing to create energy-efficient and adaptable systems for AI applications. It does so by designing novel spiking neural network architectures that improve their performance. Specifically, the two proposed architectures address the issues of training complexity, hyperparameter selection, computational flexibility, and scarcity of neuromorphic training data. The first architecture uses auxiliary learning to improve training performance and data usage, while the second architecture leverages the neuromodulation capability of spiking neurons to improve multitasking classification performance. The proposed architectures are tested on Intel's Loihi2 neuromorphic chip using several neuromorphic datasets, such as N-MNIST, DVSCIFAR10, and DVS128-Gesture. The presented results demonstrate the potential of the proposed architectures but also reveal some of their limitations, which are proposed as future research.
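The basic unit of the spiking neural networks this dissertation builds on is the leaky integrate-and-fire (LIF) neuron, which can be sketched in discrete time as follows. The decay and threshold values are illustrative, not parameters used on Loihi2.

```python
def lif_run(inputs, decay=0.9, v_thresh=1.0):
    """Integrate an input-current sequence through one LIF neuron:
    the membrane potential leaks each step, and crossing the threshold
    emits a spike (1) and hard-resets the potential to zero."""
    v, spikes = 0.0, []
    for i in inputs:
        v = decay * v + i
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still spikes periodically, because
# charge accumulates across steps until the threshold is crossed.
spikes = lif_run([0.4] * 10)
print(spikes)  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because information lives in spike timing rather than continuous activations, standard backpropagation does not apply directly, which is the training-complexity issue the abstract refers to.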
Enhancing neuromorphic computing with advanced spiking neural network architectures
Neuromorphic computing (NC) aims to revolutionize the field of artificial intelligence. It involves designing and implementing electronic systems that simulate the behavior of biological neurons using specialized hardware, such as field-programmable gate arrays (FPGAs) or dedicated neuromorphic chips [1, 2]. NC is designed to be highly efficient, optimized for low power consumption and high parallelism [3]. These systems adapt to changing environments and can learn during operation, which makes them well suited to dynamic and unpredictable problems [4].
However, the use of NC to solve real-world problems is currently limited because the performance of spiking neural networks (SNNs), the neural networks employed in NC, is not as high as that of traditional computing systems, such as specialized deep-learning devices, in terms of accuracy and learning speed [5, 6]. Several factors contribute to this performance gap: SNNs are harder to train because they require specialized training algorithms [7, 8]; they are more sensitive to hyperparameters, being dynamical systems with complex interactions [9]; they require specialized data sets (neuromorphic data) that are currently scarce and of limited size [10]; and the range of functions that SNNs can approximate is more limited than that of traditional artificial neural networks (ANNs) [11]. Before NC can have a more significant impact on AI and computing technology, these SNN-related challenges must be addressed.
This dissertation addresses current limitations of neuromorphic computing to
create energy-efficient and adaptable artificial intelligence systems. It focuses on increasing the utilization of neuromorphic computing by designing novel architectures that improve the performance of spiking neural networks. Specifically, the architectures address the issues of training complexity, hyperparameter selection, computational flexibility, and scarcity of training data. The first proposed architecture utilizes auxiliary learning to improve training performance and data usage, while the second architecture leverages the neuromodulation capability of spiking neurons to improve multitasking classification performance. The proposed architectures are tested on Intel's Loihi2 neuromorphic computer using several neuromorphic data sets, such as N-MNIST, DVSCIFAR10, and DVS128-Gesture. Results presented in this dissertation demonstrate the potential of the proposed architectures, but also reveal some limitations that are proposed as future work.
Detection of Maternal and Fetal Stress from the Electrocardiogram with Self-Supervised Representation Learning
In the pregnant mother and her fetus, chronic prenatal stress results in
entrainment of the fetal heartbeat by the maternal heartbeat, quantified by the
fetal stress index (FSI). Deep learning (DL) is capable of pattern detection in
complex medical data with high accuracy in noisy real-life environments, but
little is known about DL's utility in non-invasive biometric monitoring during
pregnancy. A recently established self-supervised learning (SSL) approach to DL
provides emotion recognition from the electrocardiogram (ECG). We hypothesized
that SSL will identify chronically stressed mother-fetus dyads from the raw
maternal abdominal electrocardiograms (aECG), containing fetal and maternal
ECG. Chronically stressed mothers and controls matched at enrolment at 32 weeks
of gestation were studied. We validated the chronic stress exposure by
psychological inventory, maternal hair cortisol and FSI. We tested two variants
of SSL architecture, one trained on generic ECG features for emotion
recognition obtained from public datasets and another transfer-learned on a
subset of our data. Our DL models accurately detect the chronic stress exposure
group (AUROC=0.982+/-0.002), the individual psychological stress score
(R2=0.943+/-0.009) and FSI at 34 weeks of gestation (R2=0.946+/-0.013), as well
as the maternal hair cortisol at birth reflecting chronic stress exposure
(0.931+/-0.006). The best performance was achieved with the DL model trained on
the public dataset and using maternal ECG alone. The present DL approach
provides a novel source of physiological insights into complex multi-modal
relationships between different regulatory systems exposed to chronic stress.
The final DL model can be deployed in low-cost regular ECG biosensors as a
simple, ubiquitous early stress detection and monitoring tool during pregnancy.
This discovery should enable early behavioral interventions.
Comment: ClinicalTrials.gov registration number: NCT03389178. Code repo:
https://code.engineering.queensu.ca/17ps21/ssl-ecg-v
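The two headline metrics in this abstract, AUROC for the chronic-stress group label and R^2 for the continuous stress scores, can be reproduced from their definitions. The data below are synthetic; only the metric formulas carry over.

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive outscores a random negative,
    counting ties as half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # 0.75
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect fit -> 1.0
```

An AUROC of 0.982 as reported above means a stressed dyad's model score exceeds a control dyad's score in roughly 98% of random pairings, which is a more interpretable reading than accuracy alone.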
Exploring the effects of robotic design on learning and neural control
The ongoing deep learning revolution has allowed computers to outclass humans
in various games and perceive features imperceptible to humans during
classification tasks. Current machine learning techniques have clearly
distinguished themselves in specialized tasks. However, we have yet to see
robots capable of performing multiple tasks at an expert level. Most work in
this field is focused on the development of more sophisticated learning
algorithms for a robot's controller given a largely static and presupposed
robotic design. By focusing on the development of robotic bodies, rather than
neural controllers, I have discovered that robots can be designed such that
they overcome many of the current pitfalls encountered by neural controllers in
multitask settings. Through this discovery, I also present novel metrics to
explicitly measure the learning ability of a robotic design and its resistance
to common problems such as catastrophic interference.
Traditionally, the physical robot design requires human engineers to plan
every aspect of the system, which is expensive and often relies on human
intuition. In contrast, within the field of evolutionary robotics, evolutionary
algorithms are used to automatically create optimized designs, however, such
designs are often still limited in their ability to perform in a multitask
setting. The metrics created and presented here give a novel path to automated
design that allow evolved robots to synergize with their controller to improve
the computational efficiency of their learning while overcoming catastrophic
interference.
Overall, this dissertation intimates the ability to automatically design
robots that are more general purpose than current robots and that can perform
various tasks while requiring less computation.
Comment: arXiv admin note: text overlap with arXiv:2008.0639
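One common way to make "resistance to catastrophic interference" explicit, in the spirit of the metrics this dissertation proposes (though not necessarily its exact definition), is an average-forgetting score over a sequence of tasks. The accuracy matrix below is synthetic.

```python
import numpy as np

def average_forgetting(acc):
    """acc[i][j] = accuracy on task j after training through task i.
    Forgetting for task j is its best-ever accuracy minus its final
    accuracy, averaged over every task except the last one learned."""
    acc = np.asarray(acc, float)
    n_tasks = acc.shape[1]
    drops = [acc[:, j].max() - acc[-1, j] for j in range(n_tasks - 1)]
    return float(np.mean(drops))

# Rows: after learning task 1, task 2, task 3 in sequence.
acc = [
    [0.90, 0.00, 0.00],
    [0.70, 0.85, 0.00],
    [0.50, 0.60, 0.80],
]
result = average_forgetting(acc)
print(result)  # ((0.90 - 0.50) + (0.85 - 0.60)) / 2
```

A robot body-controller pairing that scores near zero on such a metric retains earlier skills while acquiring new ones, which is the multitask property the dissertation argues good morphology can buy.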