The LONI QC System: A Semi-Automated, Web-Based and Freely-Available Environment for the Comprehensive Quality Control of Neuroimaging Data.
Quantifying, controlling, and monitoring image quality is an essential prerequisite for ensuring the validity and reproducibility of many types of neuroimaging data analyses. Implementing quality control (QC) procedures is key to ensuring that neuroimaging data are of high quality and remain valid in subsequent analyses. We introduce the QC system of the Laboratory of Neuro Imaging (LONI): a web-based system featuring a workflow for assessing brain imaging data of various modalities and contrasts. The design allows users to anonymously upload imaging data to the LONI-QC system, which then computes an exhaustive set of QC metrics, generating a range of scalar and vector statistics that help users perform a standardized QC. These procedures are performed in parallel on a large compute cluster. Finally, the system offers an automated QC procedure for structural MRI, which can flag each QC metric as 'good' or 'bad.' Validation using various sets of data acquired from a single scanner and from multiple sites demonstrated the reproducibility of our QC metrics, and the sensitivity and specificity of the proposed Auto QC to 'bad'-quality images in comparison to visual inspection. To the best of our knowledge, LONI-QC is the first online QC system to compute numerous QC metrics and perform visual and automated image QC of multi-contrast and multi-modal brain imaging data. The LONI-QC system has been used to assess the quality of large neuroimaging datasets acquired as part of multi-site studies such as the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) Study and the Alzheimer's Disease Neuroimaging Initiative (ADNI). LONI-QC's functionality is freely available to users worldwide, and its adoption by imaging researchers is likely to contribute substantially to upholding high standards of brain image data quality and to implementing these standards across the neuroimaging community.
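As a rough illustration of what one automated QC check might look like, here is a minimal Python sketch that estimates the signal-to-noise ratio of a structural volume and flags it 'good' or 'bad'. The metric, the background heuristic, and the threshold are our own illustrative choices, not LONI-QC's actual implementation:

    import numpy as np

    def snr_flag(volume, bg=10, threshold=20.0):
        # Crude SNR metric: mean of bright (assumed brain) voxels over the
        # standard deviation of an (assumed air) corner patch. Both the
        # heuristics and the 'good'/'bad' threshold are illustrative only.
        noise = volume[:bg, :bg, :bg].std() + 1e-9
        signal = volume[volume > np.percentile(volume, 75)].mean()
        snr = signal / noise
        return snr, ('good' if snr >= threshold else 'bad')

    vol = 100 + 10 * np.random.rand(64, 64, 64)      # stand-in structural volume
    vol[:10, :10, :10] = np.random.rand(10, 10, 10)  # noisy 'air' corner
    print(snr_flag(vol))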
Multiple Texture Boltzmann Machines
We assess the generative power of the mPoT model of [10] with tiled-convolutional weight sharing as a model for visual textures by training it specifically on this task, evaluating its performance on texture synthesis and inpainting using quantitative metrics. We also analyze the relative importance of the mean and covariance parts of the mPoT model by comparing its performance to that of its subcomponents, tiled-convolutional versions of the PoT/FoE and the Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM). Our results suggest that while state-of-the-art or better performance can be achieved using the mPoT, similar performance can be achieved with the mean-only model. We then develop a model for multiple textures based on the GB-RBM, using a shared set of weights but texture-specific hidden unit biases. We show that the multiple-texture model performs comparably to individually trained texture models.
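A minimal NumPy sketch of the multiple-texture idea: a Gaussian-Bernoulli RBM whose weight matrix is shared across textures while each texture gets its own hidden-bias vector. Sizes and variable names are our own illustrative choices, not the paper's:

    import numpy as np

    rng = np.random.default_rng(0)

    n_vis, n_hid, n_tex = 64, 32, 3
    W = rng.normal(0, 0.01, (n_vis, n_hid))  # weights shared by all textures
    b_vis = np.zeros(n_vis)                  # shared visible bias
    b_hid = np.zeros((n_tex, n_hid))         # one hidden-bias vector per texture

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_hidden(v, tex):
        # Bernoulli hiddens conditioned on real-valued visibles
        p = sigmoid(v @ W + b_hid[tex])
        return (rng.random(p.shape) < p).astype(float)

    def sample_visible(h):
        # Gaussian visibles (unit variance) with mean linear in the hiddens
        return rng.normal(h @ W.T + b_vis, 1.0)

    # one block-Gibbs step for texture 1, starting from noise
    v = rng.normal(size=n_vis)
    v = sample_visible(sample_hidden(v, tex=1))

Selecting a texture thus amounts to swapping in its bias vector while all pairwise interactions stay shared.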
Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network
Efficient and high-fidelity prior sampling and inversion for complex geological media remain a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA), and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is better suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
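The sampling step the abstract describes can be sketched as follows; the tiny two-layer decoder with untrained placeholder weights is purely illustrative of the mechanics, not the paper's trained network:

    import numpy as np

    rng = np.random.default_rng(1)

    latent_dim, grid = 20, (50, 50)   # 2500 cells / 20 latents: ratio 125 here
    W1 = rng.normal(0, 0.1, (latent_dim, 256))      # placeholder decoder weights
    W2 = rng.normal(0, 0.1, (256, grid[0] * grid[1]))

    def decode(z):
        h = np.tanh(z @ W1)
        logits = h @ W2
        # threshold the sigmoid output to get a binary (e.g. channel/matrix) field
        return (1.0 / (1.0 + np.exp(-logits)) > 0.5).reshape(grid)

    z = rng.standard_normal(latent_dim)   # prior draw: uncorrelated N(0, I)
    realization = decode(z)               # one binary geological model realization

An inversion algorithm can then operate directly on the low-dimensional z, with every proposal decoding to a geologically plausible field.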
Paraglide: Interactive Parameter Space Partitioning for Computer Simulations
In this paper we introduce paraglide, a visualization system designed for interactive exploration of the parameter spaces of multi-variate simulation models. To find the right parameter configuration, model developers frequently have to go back and forth between setting parameters and qualitatively judging the outcomes of their model. During this process, they build up a grounded understanding of the parameter effects in order to pick the right setting. Current state-of-the-art tools and practices, however, fail to provide a systematic way of exploring these parameter spaces, making informed decisions about parameter settings a tedious and workload-intensive task. Paraglide endeavors to overcome this shortcoming by assisting in the sampling of the parameter space and the discovery of qualitatively different model outcomes. This results in a decomposition of the model parameter space into regions of distinct behaviour. We developed paraglide in close collaboration with experts from three different domains, all of whom were involved in developing new models for their domain. We first analyzed the current practices of six domain experts and derived a set of design requirements, then engaged in a longitudinal user-centered design process, and finally conducted three in-depth case studies underlining the usefulness of our approach.
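The core idea of partitioning a parameter space by clustering model outcomes can be sketched in a few lines of Python; the stand-in simulation, the feature summary, and the cluster count below are our own illustrative choices, not paraglide's actual machinery:

    import numpy as np
    from sklearn.cluster import KMeans  # any clustering method would do here

    rng = np.random.default_rng(2)

    def simulate(a, b):
        # Stand-in simulation: a damped oscillation, summarized by 3 features.
        t = np.linspace(0, 1, 100)
        y = np.sin(2 * np.pi * a * t) * np.exp(-b * t)
        return np.array([y.mean(), y.std(), y.max()])

    # sample the (a, b) parameter space, run the model, cluster the outcomes
    params = rng.uniform([0.5, 0.0], [5.0, 3.0], size=(200, 2))
    features = np.array([simulate(a, b) for a, b in params])
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)

    # each cluster marks a region of parameter space with similar behaviour
    for k in range(3):
        print(k, params[labels == k].mean(axis=0))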
Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification
Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method combines a linear mixed-effects statistical model, a wavelet transform, and spatial filtering, and aims at characterizing localized discriminant features in multi-sensor signals. After the discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data, in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, a wavelet transform, and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is shown to be effective. This paper improves on earlier results on similar problems, and the three main ingredients all play an important role.
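A stripped-down sketch of such a pipeline: wavelet-domain features per trial and a Gaussian plug-in classifier, with a shrinkage-regularized pooled covariance standing in for the mixed-model estimation. The spatial-filtering and mixed-model steps of the paper are omitted, and all names and settings are illustrative:

    import numpy as np
    import pywt  # PyWavelets, for the discrete wavelet transform

    rng = np.random.default_rng(3)

    def dwt_features(trial, wavelet='db4', level=3):
        # concatenate approximation and detail coefficients as features
        return np.concatenate(pywt.wavedec(trial, wavelet, level=level))

    def fit_plugin(X0, X1, shrink=0.1):
        m0, m1 = X0.mean(0), X1.mean(0)
        pooled = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
        # diagonal shrinkage keeps the covariance invertible for small samples
        d = len(m0)
        cov = (1 - shrink) * pooled + shrink * np.eye(d) * pooled.trace() / d
        w = np.linalg.solve(cov, m1 - m0)   # LDA-style discriminant direction
        b = -0.5 * w @ (m0 + m1)
        return lambda x: x @ w + b > 0      # True -> class 1 (e.g. error trial)

    # toy single-channel trials: class 1 carries a small evoked deflection
    n, T = 40, 128
    X0 = rng.normal(size=(n, T))
    X1 = rng.normal(size=(n, T)); X1[:, 40:60] += 0.8
    F0 = np.array([dwt_features(t) for t in X0])
    F1 = np.array([dwt_features(t) for t in X1])
    clf = fit_plugin(F0, F1)
    print(clf(dwt_features(X1[0])))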
Statistical Software for State Space Methods
In this paper we review the state space approach to time series analysis and establish the notation that is adopted in this special volume of the Journal of Statistical Software. We first provide some background on the history of state space methods for the analysis of time series. This is followed by a concise overview of linear Gaussian state space analysis, including the modelling framework and appropriate estimation methods. We discuss the important class of unobserved component models, which incorporate a trend, a seasonal, a cycle, and fixed explanatory and intervention variables, for the univariate and multivariate analysis of time series. We continue the discussion by presenting methods for the computation of different estimates of the unobserved state vector: filtering, prediction, and smoothing. Estimation approaches for the other parameters in the model are also considered. Next, we discuss how the estimation procedures can be used for constructing confidence intervals, detecting outlier observations and structural breaks, and testing model assumptions of residual independence, homoscedasticity, and normality. We then show how ARIMA and ARIMA components models fit into the state space framework for time series analysis. We also provide a basic introduction to non-Gaussian state space models. Finally, we present an overview of the software tools currently available for the analysis of time series with state space methods as they are discussed in the other contributions to this special volume.
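For concreteness, here is a bare Kalman filter for the local level model, the simplest unobserved components model. The known variances and the large prior variance standing in for a diffuse initialization are assumptions of this sketch:

    import numpy as np

    # y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t  (local level model)
    def local_level_filter(y, var_eps=1.0, var_eta=0.1):
        n = len(y)
        a = np.zeros(n + 1)      # filtered state mean
        P = np.zeros(n + 1)      # filtered state variance
        P[0] = 1e7               # approximately diffuse initial condition
        for t in range(n):
            F = P[t] + var_eps                   # prediction-error variance
            K = P[t] / F                         # Kalman gain
            a[t + 1] = a[t] + K * (y[t] - a[t])  # state update
            P[t + 1] = P[t] * (1 - K) + var_eta  # variance update
        return a[1:], P[1:]

    rng = np.random.default_rng(4)
    level = np.cumsum(rng.normal(0, 0.3, 200))   # random-walk level
    y = level + rng.normal(0, 1.0, 200)          # noisy observations
    mu_hat, P_hat = local_level_filter(y)

Smoothing adds a backward pass over the same quantities; maximizing the prediction-error likelihood built from F and the innovations estimates the variances.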
Hand classification of fMRI ICA noise components
We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones, or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate them. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how data quality, data type, and preprocessing can influence the characteristics of the ICs, and present examples of particularly challenging datasets.
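As a toy example of the kind of temporal feature such guidelines rely on (the TR, cutoff, and threshold below are illustrative, not from the paper): BOLD signal of neural origin concentrates at low frequencies, so an IC timecourse with most of its power above roughly 0.1 Hz is suspect:

    import numpy as np

    def high_freq_power_fraction(timecourse, tr=0.72, cutoff_hz=0.1):
        # fraction of spectral power above the cutoff; values near 1 suggest noise
        freqs = np.fft.rfftfreq(len(timecourse), d=tr)
        power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
        return power[freqs > cutoff_hz].sum() / power.sum()

    rng = np.random.default_rng(5)
    tc = rng.normal(size=400)                  # white noise, i.e. noise-like IC
    print(high_freq_power_fraction(tc) > 0.5)  # flags it as probably noise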