Real-time decoding of question-and-answer speech dialogue using human cortical activity.
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain have typically considered listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and then to decode the utterance's identity. Because certain answers were plausible responses only to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decoded produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance: 7% and 20%). Contextual integration of decoded question likelihoods significantly improved answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
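As a sketch of the contextual-integration idea (with made-up probabilities and sizes, not the study's actual decoder outputs), the answer prior can be formed by marginalizing a question-conditioned plausibility table over the decoded question likelihoods, then combined with the answer likelihoods:

```python
import numpy as np

# Hypothetical sizes: 3 questions, 5 candidate answers.

# Decoded question likelihoods (from the "listening" decoder), normalized.
p_question = np.array([0.7, 0.2, 0.1])

# Plausibility of each answer given each question (rows: questions).
# Zero entries mark answers that are not valid responses to that question.
p_answer_given_q = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

# Context prior over answers: marginalize over decoded questions.
prior = p_question @ p_answer_given_q

# Answer likelihoods from the "speaking" decoder (unnormalized scores).
likelihood = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

# Posterior over answers: prior times likelihood, renormalized.
posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior.argmax())  # index of the context-informed best answer
```

In this toy example the context prior overrules the raw likelihood (which peaks on an answer implausible for the likely question), illustrating how decoded question context can flip the selected answer.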
Inferring Population Dynamics in Macaque Cortex
The proliferation of multi-unit cortical recordings over the last two decades, especially in macaques and during motor-control tasks, has generated interest in neural "population dynamics": the time evolution of neural activity across a group of neurons working together. A good model of these dynamics should be able to infer the activity of unobserved neurons within the same population and of the observed neurons at future times. Accordingly, Pandarinath and colleagues have introduced a benchmark to evaluate models on these two (and related) criteria: four data sets, each consisting of firing rates from a population of neurons, recorded from macaque cortex during movement-related tasks. Here we show that simple, general-purpose architectures based on recurrent neural networks (RNNs) outperform more "bespoke" models, and indeed outperform all published models on all four data sets in the benchmark. Performance can be improved further still with a novel, hybrid architecture that augments the RNN with self-attention, as in transformer networks. But pure transformer models fail to achieve this level of performance, either in our work or that of other groups. We argue that the autoregressive bias imposed by RNNs is critical for achieving the highest levels of performance. We conclude, however, by proposing that the benchmark be augmented with an alternative evaluation of latent dynamics that favors generative over discriminative models like the ones we propose in this report.
Comment: 23 pages, 10 figures, 4 tables
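To illustrate what an autoregressive rollout looks like in this setting, here is an untrained toy RNN with random weights (not any of the benchmarked architectures): the network is first conditioned on observed firing rates, then feeds its own predictions back as inputs to forecast future rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_hidden = 8, 16

# Random (untrained) weights; a real model would be fit to recorded rates.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_neurons))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_neurons, n_hidden))

def step(h, x):
    """One RNN step: update hidden state from current rates, emit next rates."""
    h = np.tanh(W_rec @ h + W_in @ x)
    return h, np.exp(W_out @ h)  # exp keeps predicted rates positive

# Condition on 20 observed time steps of toy spike counts...
observed = rng.poisson(3.0, size=(20, n_neurons)).astype(float)
h = np.zeros(n_hidden)
for x in observed:
    h, pred = step(h, x)

# ...then roll forward autoregressively: each prediction is the next input.
forecast = []
x = pred
for _ in range(10):
    h, x = step(h, x)
    forecast.append(x)
forecast = np.array(forecast)
print(forecast.shape)  # (10, 8)
```

The feedback of predictions into the input is the "autoregressive bias" the abstract argues is critical; a pure transformer trained non-autoregressively lacks this constraint.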
Heparin Induces Harmless Fibril Formation in Amyloidogenic W7FW14F Apomyoglobin and Amyloid Aggregation in Wild-Type Protein In Vitro
Glycosaminoglycans (GAGs) are frequently associated with amyloid deposits in most amyloid diseases, and there is evidence to support their active role in amyloid fibril formation. The purpose of this study was to obtain structural insight into GAG-protein interactions and to better elucidate the molecular mechanism underlying the effect of GAGs on the amyloid aggregation process and on the related cytotoxicity. To this aim, using Fourier transform infrared and circular dichroism spectroscopy, electron microscopy, and thioflavin fluorescence, we examined the effect of heparin and other GAGs on the fibrillogenesis and cytotoxicity of aggregates formed by the amyloidogenic W7FW14F apomyoglobin mutant. Although this protein is unrelated to human disease, it is a suitable model for in vitro studies because it forms amyloid-like fibrils under physiological conditions of pH and temperature. Heparin strongly stimulated aggregation into amyloid fibrils, abolishing the lag phase normally detected when following the kinetics of the process and increasing the yield of fibrils. Moreover, the protein aggregates were harmless when assayed for cytotoxicity in vitro. Neutral or positively charged compounds did not affect the aggregation rate, and the early aggregates were highly cytotoxic. The surprising result that heparin induced amyloid fibril formation in wild-type apomyoglobin and in the partially folded intermediate state of the mutant, i.e., proteins that normally show no tendency to aggregate, suggested that the interaction of heparin with apomyoglobin is highly specific because of the presence, in protein turn regions, of consensus sequences consisting of alternating basic and non-basic residues capable of binding heparin molecules. Our data suggest that GAGs play a dual role in amyloidosis: they promote beneficial fibril formation, but they also function as pathological chaperones by inducing amyloid aggregation.
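A common way to quantify a lag phase in aggregation kinetics like these is to fit a sigmoidal growth model to the fluorescence time course. The sketch below uses synthetic data; the function form and the lag-time convention (t50 - 2/k) are standard textbook choices, not parameters taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, f0, fmax, t50, k):
    """Sigmoidal growth model commonly used for amyloid aggregation kinetics."""
    return f0 + (fmax - f0) / (1.0 + np.exp(-k * (t - t50)))

# Synthetic "thioflavin fluorescence" time course (hours), with noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 48, 200)
y = sigmoid(t, 1.0, 10.0, 20.0, 0.5) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(sigmoid, t, y, p0=[1, 10, 24, 0.3])
f0, fmax, t50, k = popt

# Conventional lag-time estimate: midpoint minus 2 / apparent rate constant.
lag = t50 - 2.0 / k
print(round(lag, 1))
```

An abolished lag phase, as reported here for heparin, would show up as a fitted lag near zero (or a curve with no sigmoidal delay at all).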
Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study
Purpose:
Delirium is a neuropsychiatric disorder characterised by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom.
Methods:
Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded.
Results:
The overall prevalence of delirium was 16.3% (483 patients). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) in CFS 4 vs 1–3; OR 12.4 (6.2–24.5) in CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) in CFS 4 vs 0.2 (0.1–0.7) in CFS 8]. Both associations were independent of age and dementia.
Conclusion:
We have demonstrated an incremental increase in risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
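For readers unfamiliar with the odds-ratio arithmetic behind results like those above, here is a minimal sketch on hypothetical counts (not the study's data), with a Wald 95% confidence interval:

```python
import math

# Hypothetical 2x2 table (NOT the study's data): delirium by frailty group.
#               delirium  no delirium
frail     = [60, 140]   # e.g. higher CFS
not_frail = [40, 560]   # e.g. CFS 1-3

a, b = frail
c, d = not_frail

# Odds ratio: (odds of delirium if frail) / (odds of delirium if not frail).
odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval on the log odds ratio.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 1), round(lo, 1), round(hi, 1))
```

The study's reported ORs are adjusted estimates from models controlling for age and dementia; this unadjusted calculation only illustrates the basic quantity.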
Learning to Estimate Dynamical State with Probabilistic Population Codes.
Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons: "probabilistic population codes." We show that a recurrent neural network (a modified form of an exponential family harmonium, EFH) that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
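A minimal Kalman filter for a constant-velocity tracking problem (a generic textbook sketch, not the paper's EFH network) illustrates the predict/update recursion that the network is claimed to approximate:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
C = np.array([[1.0, 0.0]])              # observe position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.05]])                  # observation noise covariance

# Simulate a moving object and noisy position observations.
x_true = np.array([0.0, 1.0])           # [position, velocity]
obs = []
for _ in range(100):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    obs.append(C @ x_true + rng.multivariate_normal(np.zeros(1), R))

# Kalman filter: alternate predict and update steps.
x_hat, P = np.zeros(2), np.eye(2)
for y in obs:
    x_hat, P = A @ x_hat, A @ P @ A.T + Q          # predict
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
    x_hat = x_hat + K @ (y - C @ x_hat)            # update
    P = (np.eye(2) - K @ C) @ P

print(np.round(x_hat, 2))  # estimated [position, velocity]
```

Note that the filter infers velocity despite never observing it directly; the abstract's result is that an unsupervised network develops the analogous latent velocity representation.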
Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology
General Description. This dataset consists of:
The threshold crossing times of extracellularly and simultaneously recorded spikes, sorted into units (up to five, including a "hash" unit), along with sorted waveform snippets, and,
The x,y position of the fingertip of the reaching hand and the x,y position of reaching targets (both sampled at 250 Hz).
The behavioral task was to make self-paced reaches to targets arranged in a grid (e.g. 8x8) without gaps or pre-movement delay intervals. One monkey reached with the right arm (recordings made in the left hemisphere); the other reached with the left arm (right hemisphere). In some sessions recordings were made from both M1 and S1 arrays (192 channels); in most sessions M1 recordings were made alone (96 channels).
Data from two primate subjects are included: 37 sessions from monkey 1 ("Indy", spanning about 10 months) and 10 sessions from monkey 2 ("Loco", spanning about 1 month), for a total of ~20,000 and ~6,500 reaches from monkeys 1 and 2, respectively.
Possible uses. These data are ideal for training BCI decoders, in particular because they are not segmented into trials. We expect that the dataset will be valuable for researchers who wish to design improved models of sensorimotor cortical spiking or provide an equal footing for comparing different BCI decoders. Other uses could include analyses of the statistics of arm kinematics, spike noise-correlations or signal-correlations, or for exploring the stability or variability of extracellular recording over sessions.
Variable names. Each file contains data in the following format. In the below, n refers to the number of recording channels, u refers to the number of sorted units, and k refers to the number of samples.
chan_names - n x 1
A cell array of channel identifier strings, e.g. "M1 001".
cursor_pos - k x 2
The position of the cursor in Cartesian coordinates (x, y), mm.
finger_pos - k x 3 or k x 6
The position of the working fingertip in Cartesian coordinates (z, -x, -y), as reported by the hand tracker, in cm. The cursor position is thus an affine transformation of the fingertip position.
Note that for some sessions finger_pos includes the orientation of the sensor as well; the full state is thus: (z, -x, -y, azimuth, elevation, roll).
target_pos - k x 2
The position of the target in Cartesian coordinates (x, y), mm.
t - k x 1
The timestamp corresponding to each sample of the cursor_pos, finger_pos, and target_pos, seconds.
spikes - n x u
A cell array of spike event vectors. Each element in the cell array is a vector of spike event timestamps, in seconds. The first unit (u1) is the "unsorted" unit, meaning it contains the threshold crossings which remained after the spikes on that channel were sorted into other units (u2, u3, etc.). For some sessions spikes were sorted into up to 2 units (i.e. u=3); for others, 4 units (u=5).
wf - n x u
A cell array of spike event waveform "snippets". Each element in the cell array is a matrix of spike event waveforms. Each waveform corresponds to a timestamp in "spikes". Waveform samples are in microvolts.
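A sketch of how these variables fit together, using synthetic stand-ins with the shapes described above (the real files are MATLAB .mat format; this example only mimics their layout): spike timestamps can be binned onto the 250 Hz kinematic clock to pair neural counts with cursor samples.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins shaped like the variables described above.
fs = 250.0
t = np.arange(2500) / fs                          # k x 1 timestamps, seconds
cursor_pos = rng.normal(size=(t.size, 2))         # k x 2, mm
n_chan, n_units = 4, 3
spikes = [[np.sort(rng.uniform(0, t[-1], int(rng.integers(50, 200))))
           for _ in range(n_units)]
          for _ in range(n_chan)]                 # n x u "cell array"

# Bin each unit's spike timestamps onto the kinematic clock (4 ms bins),
# giving one spike-count column per (channel, unit) aligned to cursor_pos.
edges = np.append(t, t[-1] + 1 / fs)
counts = np.stack([np.histogram(unit, bins=edges)[0]
                   for chan in spikes for unit in chan], axis=1)

print(counts.shape)  # (2500, 12): k samples by n*u units
```

With real files, the same binning applies after loading `t`, `cursor_pos`, and `spikes` (e.g. via scipy.io or h5py, depending on the .mat version); the resulting count matrix and `cursor_pos` form a ready-made regression pair for BCI decoder training.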
Videos. For some sessions, we recorded screencasts of the stimulus presentation display using a dedicated hardware video grabber. These screencasts are thus a faithful representation of the stimuli and feedback presented to the monkey and are available for the following sessions:
indy_20160921_01
indy_20160930_02
indy_20160930_05
indy_20161005_06
indy_20161006_02
indy_20161007_02
indy_20161011_03
indy_20161013_03
indy_20161014_04
indy_20161017_02
Supplements. The raw broadband neural recordings that the spike trains in this dataset were extracted from are available for the following sessions:
indy_20160622_01: doi:10.5281/zenodo.1488440
indy_20160624_03: doi:10.5281/zenodo.1486147
indy_20160627_01: doi:10.5281/zenodo.1484824
indy_20160630_01: doi:10.5281/zenodo.1473703
indy_20160915_01: doi:10.5281/zenodo.1467953
indy_20160916_01: doi:10.5281/zenodo.1467050
indy_20160921_01: doi:10.5281/zenodo.1451793
indy_20160927_04: doi:10.5281/zenodo.1433942
indy_20160927_06: doi:10.5281/zenodo.1432818
indy_20160930_02: doi:10.5281/zenodo.1421880
indy_20160930_05: doi:10.5281/zenodo.1421310
indy_20161005_06: doi:10.5281/zenodo.1419774
indy_20161006_02: doi:10.5281/zenodo.1419172
indy_20161007_02: doi:10.5281/zenodo.1413592
indy_20161011_03: doi:10.5281/zenodo.1412635
indy_20161013_03: doi:10.5281/zenodo.1412094
indy_20161014_04: doi:10.5281/zenodo.1411978
indy_20161017_02: doi:10.5281/zenodo.1411882
indy_20161024_03: doi:10.5281/zenodo.1411474
indy_20161025_04: doi:10.5281/zenodo.1410423
indy_20161026_03: doi:10.5281/zenodo.1321264
indy_20161027_03: doi:10.5281/zenodo.1321256
indy_20161206_02: doi:10.5281/zenodo.1303720
indy_20161207_02: doi:10.5281/zenodo.1302866
indy_20161212_02: doi:10.5281/zenodo.1302832
indy_20161220_02: doi:10.5281/zenodo.1301045
indy_20170123_02: doi:10.5281/zenodo.1167965
indy_20170124_01: doi:10.5281/zenodo.1163026
indy_20170127_03: doi:10.5281/zenodo.1161225
indy_20170131_02: doi:10.5281/zenodo.854733
Contact Information. We would be delighted to hear from you if you find this dataset valuable, especially if it leads to publication. Corresponding author: J. E. O'Doherty <[email protected]>.
Publications making use of this dataset.
Makin, J. G., O'Doherty, J. E., Cardoso, M. M. B. & Sabes, P. N. (2018). Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm. J Neural Eng 15(2): 026010. doi:10.1088/1741-2552/aa9e95
Ahmadi, N., Constandinou, T. G., & Bouganis, C. S. (2018). Spike Rate Estimation Using Bayesian Adaptive Kernel Smoother (BAKS) and Its Application to Brain Machine Interfaces. 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 2018, pp. 2547-2550. doi:10.1109/EMBC.2018.8512830
Balasubramanian, M., Ruiz, T., Cook, B., Bhattacharyya, S., Prabhat, Shrivastava, A. & Bouchard K. (2018). Optimizing the Union of Intersections LASSO (UoILASSO) and Vector Autoregressive (UoIVAR) Algorithms for Improved Statistical Estimation at Scale. arXiv preprint arXiv:1808.06992
Ahmadi, N., Constandinou, T. G., & Bouganis, C. S. (2019). Decoding Hand Kinematics from Local Field Potentials Using Long Short-Term Memory (LSTM) Network. arXiv preprint arXiv:1901.00708
Clark, D. G., Livezey, J. A., & Bouchard, K. E. (2019). Unsupervised Discovery of Temporal Structure in Noisy Data with Dynamical Components Analysis. arXiv preprint arXiv:1905.0994
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
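A minimal sketch of the training rule mentioned above: one step of contrastive divergence (CD-1) on a small binary restricted Boltzmann machine with toy data (layer sizes and learning rate are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
n_vis, n_hid = 12, 8            # e.g. unisensory inputs -> multisensory layer

W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.05):
    """One step of contrastive divergence (CD-1) on a binary RBM."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                    # hidden given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                  # reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)                   # hidden given reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b_v += lr * (v0 - pv1)
    b_h += lr * (ph0 - ph1)
    return np.mean((v0 - pv1) ** 2)                # reconstruction error

# Train on a toy "population response" pattern; error should fall.
v = (rng.random(n_vis) < 0.3).astype(float)
errors = [cd1_update(v) for _ in range(200)]
print(np.mean(errors[-20:]) < np.mean(errors[:20]))
```

In the paper's setting, the visible units would carry unisensory population activity and the hidden units would come to encode the multisensory estimate; this sketch only shows the learning rule itself on a single fixed pattern.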