Recursive Estimation of User Intent from Noninvasive Electroencephalography using Discriminative Models
We study the problem of inferring user intent from noninvasive
electroencephalography (EEG) to restore communication for people with severe
speech and physical impairments (SSPI). The focus of this work is improving the
estimation of posterior symbol probabilities in a typing task. At each
iteration of the typing procedure, a subset of symbols is chosen for the next
query based on the current probability estimate. Evidence about the user's
response is collected from event-related potentials (ERP) in order to update
symbol probabilities, until one symbol exceeds a predefined confidence
threshold. We provide a graphical model describing this task, and derive a
recursive Bayesian update rule based on a discriminative probability over label
vectors for each query, which we approximate using a neural network classifier.
We evaluate the proposed method in a simulated typing task and show that it
outperforms previous approaches based on generative modeling. Comment: 5 pages, 2 figures
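As a generic illustration of this style of recursive update (a simplified sketch, not the paper's exact discriminative rule; the two-probability likelihood below is an assumption made for illustration):

```python
import numpy as np

def update_posterior(prior, query_mask, p_attended, p_not):
    """One recursive Bayesian update of symbol probabilities.

    prior: (n_symbols,) current posterior over symbols.
    query_mask: boolean (n_symbols,), True for symbols shown in this query.
    p_attended / p_not: classifier-derived probabilities that the observed
    ERP evidence corresponds to an attended vs. unattended query.
    """
    likelihood = np.where(query_mask, p_attended, p_not)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Four symbols, uniform prior; the first two are queried and the ERP
# evidence favors "attended".
prior = np.full(4, 0.25)
mask = np.array([True, True, False, False])
post = update_posterior(prior, mask, p_attended=0.9, p_not=0.1)
print(post)  # [0.45 0.45 0.05 0.05]
```

Typing would repeat this update with new queries until one symbol's posterior crosses the confidence threshold.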
Stabilizing Subject Transfer in EEG Classification with Divergence Estimation
Classification models for electroencephalogram (EEG) data show a large
decrease in performance when evaluated on unseen test subjects. We reduce this
performance decrease using new regularization techniques during model training.
We propose several graphical models to describe an EEG classification task.
From each model, we identify statistical relationships that should hold true in
an idealized training scenario (with infinite data and a globally-optimal
model) but that may not hold in practice. We design regularization penalties to
enforce these relationships in two stages. First, we identify suitable proxy
quantities (divergences such as Mutual Information and Wasserstein-1) that can
be used to measure statistical independence and dependence relationships.
Second, we provide algorithms to efficiently estimate these quantities during
training using secondary neural network models. We conduct extensive
computational experiments using a large benchmark EEG dataset, comparing our
proposed techniques with a baseline method that uses an adversarial classifier.
We find our proposed methods significantly increase balanced accuracy on test
subjects and decrease overfitting. The proposed methods exhibit a larger
benefit over a greater range of hyperparameters than the baseline method, with
only a small computational cost at training time. These benefits are largest
when used for a fixed training period, though there is still a significant
benefit for a subset of hyperparameters when our techniques are used in
conjunction with early stopping regularization. Comment: 16 pages, 5 figures
Fast and Expressive Gesture Recognition using a Combination-Homomorphic Electromyogram Encoder
We study the task of gesture recognition from electromyography (EMG), with
the goal of enabling expressive human-computer interaction at high accuracy,
while minimizing the time required for new subjects to provide calibration
data. To fulfill these goals, we define combination gestures consisting of a
direction component and a modifier component. New subjects only demonstrate the
single component gestures and we seek to extrapolate from these to all possible
single or combination gestures. We extrapolate to unseen combination gestures
by combining the feature vectors of real single gestures to produce synthetic
training data. This strategy allows us to provide a large and flexible gesture
vocabulary, while not requiring new subjects to demonstrate combinatorially
many example gestures. We pre-train an encoder and a combination operator using
self-supervision, so that we can produce useful synthetic training data for
unseen test subjects. To evaluate the proposed method, we collect a real-world
EMG dataset, and measure the effect of augmented supervision against two
baselines: a partially-supervised model trained with only single gesture data
from the unseen subject, and a fully-supervised model trained with real single
and real combination gesture data from the unseen subject. We find that the
proposed method provides a dramatic improvement over the partially-supervised
model, and achieves a useful classification accuracy that in some cases
approaches the performance of the fully-supervised model. Comment: 24 pages, 7 figures, 6 tables. V2: add link to code, fix bibliography
User Training with Error Augmentation for Electromyogram-based Gesture Classification
We designed and tested a system for real-time control of a user interface by
extracting surface electromyographic (sEMG) activity from eight electrodes in a
wrist-band configuration. sEMG data were streamed into a machine-learning
algorithm that classified hand gestures in real-time. After an initial model
calibration, participants were presented with one of three types of feedback
during a human-learning stage: veridical feedback, in which predicted
probabilities from the gesture classification algorithm were displayed without
alteration, modified feedback, in which we applied a hidden augmentation of
error to these probabilities, and no feedback. User performance was then
evaluated in a series of minigames, in which subjects were required to use
eight gestures to manipulate their game avatar to complete a task. Experimental
results indicated that, relative to baseline, the modified feedback condition
led to significantly improved accuracy and improved gesture class separation.
These findings suggest that real-time feedback in a gamified user interface
with manipulation of feedback may enable intuitive, rapid, and accurate task
acquisition for sEMG-based gesture recognition applications. Comment: 10 pages, 10 figures
All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins
All-optical electrophysiology—spatially resolved simultaneous optical perturbation and measurement of membrane voltage—would open new vistas in neuroscience research. We evolved two archaerhodopsin-based voltage indicators, QuasAr1 and QuasAr2, which show improved brightness and voltage sensitivity, have microsecond response times and produce no photocurrent. We engineered a channelrhodopsin actuator, CheRiff, which shows high light sensitivity and rapid kinetics and is spectrally orthogonal to the QuasArs. A coexpression vector, Optopatch, enabled cross-talk–free genetically targeted all-optical electrophysiology. In cultured rat neurons, we combined Optopatch with patterned optical excitation to probe back-propagating action potentials (APs) in dendritic spines, synaptic transmission, subcellular microsecond-timescale details of AP propagation, and simultaneous firing of many neurons in a network. Optopatch measurements revealed homeostatic tuning of intrinsic excitability in human stem cell–derived neurons. In rat brain slices, Optopatch induced and reported APs and subthreshold events with high signal-to-noise ratios. The Optopatch platform enables high-throughput, spatially resolved electrophysiology without the use of conventional electrodes.
Next-Generation Roadmap for Patient-Centered Genomics
In the era of precision medicine, understanding genetic variation has grown from a topic of research interest into a tangible source of therapeutic benefit for patients. As the list of confirmed links between genetic lesions and disease continues to grow, so does the list of actionable genetic diagnoses.
The workup of a childhood-onset schizophrenia case provides a useful foil for discussion of current methods for genomic diagnostics, both to demonstrate some of the important available analyses, and to highlight areas of ongoing need. In brief, the stages of this case as pertains to the general diagnostic process are: clinical workup, sequencing and technical processing, analysis and interpretation of results, and follow-up research study.
The patient in this case presented with command auditory hallucinations at age 6 and began empirical treatment for schizophrenia; he was subsequently found to have a novel de novo heterozygous missense mutation in ATP1A3 NM_152296.4 c.385G>A, predicted to cause the coding change p.V129M. This gene codes for a neuron-specific isoform of the alpha subunit of the sodium-potassium pump complex that helps establish transmembrane ion gradients necessary for neuronal function. The variant found in this case is now being replicated in a patient-derived iPS-neuron model to seek greater insight into the mechanism of disease and possible therapeutic opportunities.
Generalizing from this case, researchers and clinicians hoping to replicate or improve upon this patient-centric genomics workflow can benefit from reviewing technical and infrastructural best practices. This case may also help illustrate some of the key difficulties in connecting genomic evidence with appropriate functional validation and other clinical markers to support well-informed decision-making.
AutoTransfer: Subject Transfer Learning with Censored Representations on Biosignals Data
We provide a regularization framework for subject transfer learning in which
we seek to train an encoder and classifier to minimize classification loss,
subject to a penalty measuring independence between the latent representation
and the subject label. We introduce three notions of independence and
corresponding penalty terms using mutual information or divergence as a proxy
for independence. For each penalty term, we provide several concrete estimation
algorithms, using analytic methods as well as neural critic functions. We
provide a hands-off strategy for applying this diverse family of regularization
algorithms to a new dataset, which we call "AutoTransfer". We evaluate the
performance of these individual regularization strategies and our AutoTransfer
method on EEG, EMG, and ECoG datasets, showing that these approaches can
improve subject transfer learning for challenging real-world datasets. Comment: 17-page extended version of International Engineering in Medicine and Biology Conference 2022 paper
CAMBI-tech/alpha-attenuation: Initial release
<p>Initial release of code accompanying "Target-Related Alpha Attenuation in a Brain-Computer Interface Rapid Serial Visual Presentation Calibration".</p>
EMG from Combination Gestures with Ground-truth Joystick Labels
<p>Dataset of surface EMG recordings from 11 subjects performing single and combination gestures, from "<strong>A Multi-label Classification Approach to Increase Expressivity of EMG-based Gesture Recognition</strong>" by Niklas Smedemark-Margulies, Yunus Bicer, Elifnur Sunger, Stephanie Naufel, Tales Imbiriba, Eugene Tunik, Deniz Erdogmus, and Mathew Yarossi.</p>
<p>For more details and example usage, see the following:</p>
<ul>
<li>Paper pdf - <a href="https://arxiv.org/pdf/2309.12217.pdf">https://arxiv.org/pdf/2309.12217.pdf</a></li>
<li>Experiment code - <a href="https://github.com/neu-spiral/multi-label-emg">https://github.com/neu-spiral/multi-label-emg</a></li>
</ul>
<h1>Contents</h1>
<p>Dataset of single and combination gestures from 11 subjects. <br>Subjects participated in 13 experimental blocks.<br>During each block, they followed visual prompts to perform gestures while also manipulating a joystick.<br>Surface EMG was recorded from 8 electrodes on the forearm; labels were recorded according to the current visual prompt and the current state of the joystick.</p>
<p>Experiments included the following blocks:</p>
<ul>
<li>1 Calibration block</li>
<li>6 Simultaneous-Pulse Combination blocks (3 without feedback, 3 with feedback)</li>
<li>6 Hold-Pulse Combination blocks (3 without feedback, 3 with feedback)</li>
</ul>
<p>The contents of each block type were as follows:</p>
<ul>
<li>In the Calibration block, subjects performed 8 repetitions of each of the 4 direction gestures, 2 modifier gestures, and a resting pose.<br>Each Calibration trial provided 160 overlapping examples, for a total of: 8 repetitions x 7 gestures x 160 examples = 8960 examples.</li>
<li>In Simultaneous-Pulse Combination blocks, subjects performed 8 trials of combination gestures, where both components were performed simultaneously.<br>Each Simultaneous-Pulse trial provided 240 overlapping examples, for a total of: 8 trials x 240 examples = 1920 examples.</li>
<li>In Hold-Pulse Combination blocks, subjects performed 28 trials of combination gestures, where 1 gesture component was held while the other was pulsed.<br>Each Hold-Pulse trial provided 240 overlapping examples, for a total of: 28 trials x 240 examples = 6720 examples.</li>
</ul>
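<p>The per-block totals above follow directly from the listed counts; as a quick arithmetic check:</p>

```python
# Windows ("examples") per block, from the counts given above.
calibration = 8 * 7 * 160   # 8 repetitions x 7 gestures x 160 windows
simultaneous = 8 * 240      # 8 trials x 240 windows
hold_pulse = 28 * 240       # 28 trials x 240 windows

print(calibration, simultaneous, hold_pulse)  # 8960 1920 6720
```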
<p>A single data example (from any block) corresponds to a 250 ms window of EMG recorded at 1926 Hz (built-in 20–450 Hz bandpass filtering applied).<br>A 50 ms step size was used between windows; note that neighboring data examples therefore overlap.</p>
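<p>Under the stated sampling rate, window length, and step size, windowing a continuous recording could be sketched as follows (<code>extract_windows</code> is an illustrative helper, not part of the released code):</p>

```python
import numpy as np

FS = 1926                   # sampling rate, Hz
WINDOW = int(0.250 * FS)    # 250 ms window -> 481 samples
STEP = int(0.050 * FS)      # 50 ms step -> 96 samples

def extract_windows(emg: np.ndarray) -> np.ndarray:
    """Slice a (channels, timesteps) recording into overlapping windows.

    Returns an array of shape (n_windows, channels, WINDOW), matching the
    (items, channels, timesteps) layout used by the dataset files.
    """
    n_windows = (emg.shape[1] - WINDOW) // STEP + 1
    starts = np.arange(n_windows) * STEP
    return np.stack([emg[:, s:s + WINDOW] for s in starts])

# Example: a fake 8-channel, 30-second recording.
fake = np.zeros((8, 30 * FS))
windows = extract_windows(fake)
print(windows.shape)  # (597, 8, 481)
```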
<p>Feedback was provided as follows:</p>
<ul>
<li>In blocks with feedback, a model pre-trained on the Calibration data was used to give real-time visual feedback during the trial.</li>
<li>In blocks without feedback, no model was used, and the visual prompt was the only source of information about the current gesture.</li>
</ul>
<p>For more details, see the paper.</p>
<h1>Labels</h1>
<p>Two types of labels are provided: </p>
<ul>
<li>joystick labels were recorded based on the position of the joystick, and are treated as ground-truth.</li>
<li>visual labels were also recorded based on what prompt was currently being shown to the subject.</li>
</ul>
<p>For both joystick and visual labels, the following structure applies. Each gesture trial has a two-part label.</p>
<p>The first label component describes the direction gesture, and takes values in {0, 1, 2, 3, 4}, with the following meaning:</p>
<ul>
<li>0 - "Up" (joystick pull)</li>
<li>1 - "Down" (joystick push)</li>
<li>2 - "Left" (joystick left)</li>
<li>3 - "Right" (joystick right)</li>
<li>4 - "NoDirection" (absence of a direction gesture; none of the above)</li>
</ul>
<p>The second label component describes the modifier gesture, and takes values in {0, 1, 2}, with the following meaning:</p>
<ul>
<li>0 - "Pinch" (joystick trigger button)</li>
<li>1 - "Thumb" (joystick thumb button)</li>
<li>2 - "NoModifier" (absence of a modifier gesture; none of the above)</li>
</ul>
<h2>Examples of Label Structure</h2>
<p>Single gestures have labels like (0, 2) indicating ("Up", "NoModifier") or (4, 1) indicating ("NoDirection", "Thumb").</p>
<p>Combination gestures have labels like (0, 0) indicating ("Up", "Pinch") or (2, 1) indicating ("Left", "Thumb").</p>
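<p>Under the label scheme above, decoding a pair of one-hot label rows into names could look like the following sketch (the lookup lists mirror the tables above; the <code>decode</code> helper is illustrative, not part of the released code):</p>

```python
import numpy as np

DIRECTIONS = ["Up", "Down", "Left", "Right", "NoDirection"]
MODIFIERS = ["Pinch", "Thumb", "NoModifier"]

def decode(direction_onehot: np.ndarray, modifier_onehot: np.ndarray):
    """Map one-hot (5,) direction and (3,) modifier rows to a name pair."""
    return (DIRECTIONS[int(np.argmax(direction_onehot))],
            MODIFIERS[int(np.argmax(modifier_onehot))])

# The combination gesture with label (0, 0):
print(decode(np.eye(5)[0], np.eye(3)[0]))  # ('Up', 'Pinch')
```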
<h1>File layout</h1>
<p>Data are provided in NumPy and MATLAB formats. The descriptions below apply to both.</p>
<p>Each experimental block is provided in a separate folder.<br>Within one experimental block, the following files are provided:</p>
<ul>
<li><code>data.npy</code> - Raw EMG data, with shape (items, channels, timesteps).</li>
<li><code>joystick_direction_labels.npy</code> - one-hot joystick direction labels, with shape (items, 5).</li>
<li><code>joystick_modifier_labels.npy</code> - one-hot joystick modifier labels, with shape (items, 3).</li>
<li><code>visual_direction_labels.npy</code> - one-hot visual direction labels, with shape (items, 5).</li>
<li><code>visual_modifier_labels.npy</code> - one-hot visual modifier labels, with shape (items, 3).</li>
</ul>
<h1>Loading data</h1>
<p>For example code snippets for loading data, see the associated code repository.</p>
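<p>As a minimal self-contained sketch (the stand-in block folder below is created with dummy arrays purely so the snippet runs; with the real dataset, point <code>block</code> at a downloaded block directory instead):</p>

```python
import pathlib
import tempfile
import numpy as np

# Stand-in for one experimental block's folder, filled with dummy arrays.
block = pathlib.Path(tempfile.mkdtemp())
np.save(block / "data.npy", np.zeros((10, 8, 481)))
np.save(block / "joystick_direction_labels.npy", np.eye(5)[np.zeros(10, int)])
np.save(block / "joystick_modifier_labels.npy", np.eye(3)[np.zeros(10, int)])

# Loading mirrors the file layout described above.
data = np.load(block / "data.npy")                            # (items, channels, timesteps)
direction = np.load(block / "joystick_direction_labels.npy")  # (items, 5)
modifier = np.load(block / "joystick_modifier_labels.npy")    # (items, 3)

assert len(data) == len(direction) == len(modifier)
print(data.shape, direction.shape, modifier.shape)
```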