Robust spectrotemporal decomposition by iteratively reweighted least squares
Classical nonparametric spectral analysis uses sliding windows to capture the dynamic nature of most real-world time series. This universally accepted approach fails to exploit the temporal continuity in the data and is not well-suited for signals with highly structured time–frequency representations. For a time series whose time-varying mean is the superposition of a small number of oscillatory components, we formulate nonparametric batch spectral analysis as a Bayesian estimation problem. We introduce prior distributions on the time–frequency plane that yield maximum a posteriori (MAP) spectral estimates that are continuous in time yet sparse in frequency. Our spectral decomposition procedure, termed spectrotemporal pursuit, can be efficiently computed using an iteratively reweighted least-squares algorithm and scales well with typical data lengths. We show that spectrotemporal pursuit works by applying to the time series a set of data-derived filters. Using a link between Gaussian mixture models, ℓ1 minimization, and the expectation–maximization algorithm, we prove that spectrotemporal pursuit converges to the global MAP estimate. We illustrate our technique on simulated and real human EEG data as well as on human neural spiking activity recorded during loss of consciousness induced by the anesthetic propofol. For the EEG data, our technique yields substantially denoised spectral estimates with significantly higher time and frequency resolution than multitaper spectral estimates. For the neural spiking data, we obtain a new spectral representation of neuronal firing rates. Spectrotemporal pursuit offers a robust spectral decomposition framework that is a principled alternative to existing methods for decomposing time series into a small number of smooth oscillatory components.

National Institutes of Health (U.S.) (Transformative Research Award GM 104948); National Institutes of Health (U.S.) (New Innovator Award R01-EB006385)
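The abstract centers on an iteratively reweighted least-squares (IRLS) algorithm for sparsity-inducing MAP estimation. As a hedged illustration of the general IRLS idea only (not the paper's spectrotemporal prior or filters), here is a minimal numpy sketch for a plain ℓ1-penalized least-squares problem; the function name `irls_l1` and the parameters `lam` and `eps` are illustrative choices, not from the paper:

```python
import numpy as np

def irls_l1(A, y, lam=0.5, n_iter=50, eps=1e-6):
    """Sparse regression min ||y - A x||^2 + lam * ||x||_1 via IRLS.

    Each iteration solves a weighted ridge problem whose weights
    1/(|x_i| + eps) majorize the l1 penalty at the current estimate.
    """
    n = A.shape[1]
    x = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)              # reweight from current estimate
        x = np.linalg.solve(AtA + lam * np.diag(w), Aty)
    return x

# Toy problem: recover a 3-sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[[3, 7, 20]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = irls_l1(A, y)
```

Each weighted ridge solve is a closed-form linear system, which is why IRLS-style schemes scale well with data length, as the abstract notes for spectrotemporal pursuit.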
Modern Statistical/Machine Learning Techniques for Bio/Neuro-imaging Applications
Developments in modern bio-imaging have allowed the routine collection of vast amounts of data from a variety of techniques. The challenge lies in how to build accurate and efficient models to draw conclusions from the data and facilitate scientific discoveries. Fortunately, recent advances in statistics, machine learning, and deep learning provide valuable tools. This thesis describes some of our efforts to build scalable Bayesian models for four bio-imaging applications: (1) Stochastic Optical Reconstruction Microscopy (STORM) imaging, (2) particle tracking, (3) voltage smoothing, and (4) detecting color-labeled neurons in C. elegans and assigning identities to the detections.
Quality and denoising in real-time functional magnetic resonance imaging neurofeedback: A methods review
First published: 25 April 2020

Neurofeedback training using real-time functional magnetic resonance imaging (rtfMRI-NF) allows subjects voluntary control of localised and distributed brain activity. It has sparked increased interest as a promising non-invasive treatment option in neuropsychiatric and neurocognitive disorders, although its efficacy and clinical significance are yet to be determined. In this work, we present the first extensive review of acquisition, processing and quality control methods available to improve the quality of the neurofeedback signal. Furthermore, we investigate the state of denoising and quality control practices in 128 recently published rtfMRI-NF studies. We found: (a) that less than a third of the studies reported implementing standard real-time fMRI denoising steps, (b) significant room for improvement with regards to methods reporting and (c) the need for methodological studies quantifying and comparing the contribution of denoising steps to the neurofeedback signal quality. Advances in rtfMRI-NF research depend on reproducibility of methods and results. Notably, a systematic effort is needed to build up evidence that disentangles the various mechanisms influencing neurofeedback effects. To this end, we recommend that future rtfMRI-NF studies: (a) report implementation of a set of standard real-time fMRI denoising steps according to a proposed COBIDAS-style checklist (https://osf.io/kjwhf/), (b) ensure the quality of the neurofeedback signal by calculating and reporting community-informed quality metrics and applying offline control checks and (c) strive to adopt transparent principles in the form of methods and data sharing and support of open-source rtfMRI-NF software. Code and data for reproducibility, as well as an interactive environment to explore the study data, can be accessed at https://github.com/jsheunis/quality-and-denoising-in-rtfmri-nf.

LSH‐TKI, Grant/Award Number: LSHM16053‐SGF; Philips Researc
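The review recommends calculating and reporting quality metrics for the fMRI signal. One widely used offline metric of this kind is the temporal signal-to-noise ratio (tSNR); the sketch below is a generic illustration of that metric, not a metric prescribed by this particular checklist, and the function name `tsnr` is an illustrative choice:

```python
import numpy as np

def tsnr(timeseries):
    """Temporal signal-to-noise ratio per voxel: temporal mean / temporal std.

    timeseries: array of shape (n_timepoints, n_voxels).
    Higher values indicate a more stable signal relative to scan-to-scan noise.
    """
    mean = timeseries.mean(axis=0)
    std = timeseries.std(axis=0, ddof=1)
    return mean / np.maximum(std, 1e-12)      # guard against zero variance

# Synthetic example: 200 timepoints, 5 voxels, baseline 100 with unit noise,
# so tSNR should come out near 100 for each voxel.
rng = np.random.default_rng(1)
signal = 100.0 + rng.standard_normal((200, 5))
vals = tsnr(signal)
```

In a real pipeline the same computation would be applied to motion-corrected voxel time series, typically after detrending.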
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, allowing increased interpretability of the results and better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.

Comment: 30 pages, 18 figures
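A concrete way to see the convex group-sparse model in action is through its proximal operator, block soft-thresholding, which shrinks each group in Euclidean norm and discards groups that fall below the threshold. This is a minimal, generic sketch of that standard operator, assuming non-overlapping groups, not an implementation from the chapter:

```python
import numpy as np

def prox_group_lasso(x, groups, threshold):
    """Proximal operator of the group-lasso penalty: block soft-thresholding.

    Each group of coordinates is shrunk toward zero by `threshold` in
    Euclidean norm and zeroed out entirely if its norm falls below it --
    the convex counterpart of selecting whole groups at once.
    """
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > threshold:
            out[g] = (1.0 - threshold / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1, 1.0, 1.0])
groups = [[0, 1], [2, 3], [4, 5]]
z = prox_group_lasso(x, groups, threshold=1.0)
# The middle group has norm ~0.14 < 1 and is eliminated as a whole;
# the other two groups survive with their norms reduced by 1.
```

Embedding this operator in a proximal-gradient loop yields one of the efficient optimization solutions the abstract alludes to for group-sparse problems.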
Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis
Computational neuroscience seeks to discover the underlying mechanisms by which neural activity is generated. With recent advances in neural data acquisition methods, the bottleneck of this pursuit is the analysis of the ever-growing volume of neural data acquired in numerous labs from various experiments. These analyses can be broadly divided into two categories: first, extraction of high-quality neuronal signals from noisy large-scale recordings; second, inference for statistical models aimed at explaining the neuronal signals and the underlying processes that give rise to them. Conventionally, the majority of the methodologies employed for this effort have been based on statistics and signal processing. In recent years, however, recruiting artificial neural networks (ANNs) for neural data analysis has gained traction, owing to their immense success in computer vision and natural language processing and the stellar track record of ANN architectures generalizing to a wide variety of problems. In this work we investigate and improve upon statistical and ANN machine learning methods applied to multi-electrode array recordings and inference for dynamical systems that play critical roles in computational neuroscience.
In the first and second parts of this thesis, we focus on the spike sorting problem. The analysis of large-scale multi-neuronal spike train data is crucial for current and future neuroscience research. However, this type of data is not available directly from recordings and requires further processing to be converted into spike trains. Dense multi-electrode arrays (MEAs) are the standard means of collecting such recordings, and the processing needed to extract spike trains from these raw electrical signals is carried out by ``spike sorting'' algorithms. We introduce a robust and scalable MEA spike sorting pipeline, YASS (Yet Another Spike Sorter), to address many challenges that are inherent to this task. We pay particular attention to MEA data collected from the primate retina, both for its unique challenges and for the side information it makes available, which ultimately assists us in scoring different spike sorting pipelines. We also introduce a neural network architecture and an accompanying training scheme specifically devised to address the challenging task of deconvolution in MEA recordings.
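To make the spike sorting task concrete, the sketch below shows the simplest possible front end: threshold-crossing detection on a single channel with a robust noise estimate. This is a toy illustration of the problem, under assumed parameters (`thresh_mult`, `refractory`), and not a component of YASS, which layers filtering, whitening, deconvolution and clustering on top of detection:

```python
import numpy as np

def detect_spikes(trace, thresh_mult=4.0, refractory=30):
    """Toy threshold-crossing spike detector for one extracellular channel.

    The threshold is set from a median-absolute-deviation noise estimate,
    a common heuristic; a refractory gap (in samples) prevents one spike
    from producing several detections.
    """
    noise = np.median(np.abs(trace)) / 0.6745     # MAD-based noise std
    thresh = thresh_mult * noise
    crossings = np.flatnonzero(np.abs(trace) > thresh)
    spikes, last = [], -refractory
    for t in crossings:                           # enforce refractory gap
        if t - last >= refractory:
            spikes.append(t)
            last = t
    return np.array(spikes)

# Synthetic trace: unit Gaussian noise with three large negative deflections.
rng = np.random.default_rng(2)
trace = rng.standard_normal(3000)
for t in (500, 1500, 2500):
    trace[t] -= 10.0
events = detect_spikes(trace)
```

On dense MEAs the hard part is not detection but attributing overlapping events to individual neurons, which is why full pipelines rely on deconvolution rather than per-channel thresholds.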
In the last part, we shift our attention to inference for nonlinear dynamics. Dynamical systems are the governing force behind many real-world phenomena and temporally correlated data. Recently, a number of neural network architectures have been proposed to address inference for nonlinear dynamical systems. We introduce two different methods, based on normalizing flows, for posterior inference in latent nonlinear dynamical systems. We also present gradient-based amortized posterior inference approaches using the auto-encoding variational Bayes framework that can be applied to a wide range of generative models with nonlinear dynamics. We call our method FNF. FNF performs favorably against state-of-the-art inference methods in terms of accuracy of predictions and quality of uncovered codes and dynamics on synthetic data.
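The mechanism underlying any normalizing-flow method is the change-of-variables formula, which converts the density of a simple base distribution through an invertible map. The sketch below illustrates that formula for a single elementwise affine layer; it is a generic one-layer example, not the FNF architecture, which stacks many learned invertible maps:

```python
import numpy as np

def affine_flow_logpdf(x, scale, shift):
    """Log-density of x under an elementwise affine flow of a standard normal.

    If z ~ N(0, I) and x = scale * z + shift, the change-of-variables
    formula gives
        log p(x) = log N(z; 0, I) - sum(log |scale|),  z = (x - shift) / scale,
    where the second term is the log-determinant of the flow's Jacobian.
    """
    z = (x - shift) / scale
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi))
    return log_base - np.sum(np.log(np.abs(scale)))

x = np.array([1.0, -0.5])
lp = affine_flow_logpdf(x, scale=np.array([2.0, 0.5]), shift=np.array([0.0, 1.0]))
```

Because both the transform and its log-determinant are differentiable, expressions like this slot directly into the auto-encoding variational Bayes objective mentioned in the abstract.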