
    Bayesian Functional Data Analysis Using WinBUGS

    We provide user-friendly software for Bayesian analysis of functional data models using WinBUGS 1.4. The excellent properties of Bayesian analysis in this context are due to: (1) dimensionality reduction, which leads to low-dimensional projection bases; (2) mixed model representation of functional models, which provides a modular approach to model extension; and (3) orthogonality of the principal component bases, which contributes to excellent chain convergence and mixing properties. Our paper provides one more, essential, reason for using Bayesian analysis for functional models: the existence of software.
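The dimension-reduction step this abstract credits, projecting curves onto an orthogonal principal component basis, can be sketched in a few lines. All sizes and the simulated curves below are illustrative, not taken from the paper's software:

```python
import numpy as np

# Illustrative sketch: project observed functional data onto a
# low-dimensional principal component basis. Sizes are arbitrary.
rng = np.random.default_rng(0)
n_subjects, n_grid = 50, 100           # 50 curves on a 100-point grid
t = np.linspace(0, 1, n_grid)
curves = (rng.normal(size=(n_subjects, 1)) * np.sin(2 * np.pi * t)
          + rng.normal(size=(n_subjects, 1)) * np.cos(2 * np.pi * t)
          + 0.1 * rng.normal(size=(n_subjects, n_grid)))

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve

# SVD gives orthonormal principal component functions (rows of vt);
# this orthogonality is what the abstract credits for good MCMC mixing.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
K = 2                                  # low-dimensional projection basis
scores = centered @ vt[:K].T           # per-subject scores, the quantities a
                                       # Bayesian mixed model treats as
                                       # random effects

print(scores.shape)                    # (50, 2)
```

A mixed model representation would then place priors on the scores and basis loadings, which is the modularity the abstract refers to.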

    Neuroconductor: an R platform for medical imaging analysis

    Neuroconductor (https://neuroconductor.org) is an open-source platform for rapid testing and dissemination of reproducible computational imaging software. The goals of the project are to: (i) provide a centralized repository of R software dedicated to image analysis, (ii) disseminate software updates quickly, (iii) train a large, diverse community of scientists using detailed tutorials and short courses, (iv) increase software quality via automatic and manual quality controls, and (v) promote reproducibility of image data analysis. Based on the programming language R (https://www.r-project.org/), Neuroconductor starts with 51 inter-operable packages that cover multiple areas of imaging including visualization, data processing and storage, and statistical inference. Neuroconductor accepts new R package submissions, which are subject to a formal review and continuous automated testing. We provide a description of the purpose of Neuroconductor and the user and developer experience.

    Improving Reliability of Subject-Level Resting-State fMRI Parcellation with Shrinkage Estimators

    A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often used for defining the network nodes in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single-subject data. However, this is difficult due to the low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage estimators of such measures, allowing the noisy subject-specific estimator to "borrow strength" in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw intervoxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. Our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets (a simulated dataset where the true parcellation is known and is subject-specific, and a test-retest dataset consisting of two 7-minute rsfMRI scans from 20 subjects), we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor cortex by up to 30%.
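The "borrowing strength" idea described above can be illustrated with a generic variance-ratio shrinkage weight, shrinking each subject's noisy correlation estimates toward the group mean. The weight formula below is the standard normal-normal empirical Bayes form, not necessarily the paper's exact estimator, and all simulated quantities are illustrative:

```python
import numpy as np

# Illustrative empirical Bayes shrinkage of noisy subject-level correlation
# estimates toward the group mean. The variance-ratio weight is the generic
# normal-normal form, used here as a stand-in for the paper's estimators.
rng = np.random.default_rng(1)
n_subjects, n_pairs = 20, 500

# Simulated truth: each subject deviates a little from a shared population
# pattern; observed values add larger sampling noise.
population = rng.uniform(-0.5, 0.5, size=n_pairs)
subj_truth = population + rng.normal(0, 0.1, size=(n_subjects, n_pairs))
noise_sd = 0.3                                   # sampling noise, assumed known
observed = subj_truth + rng.normal(0, noise_sd, size=(n_subjects, n_pairs))

group_mean = observed.mean(axis=0)

# Per-pair variance decomposition: observed spread = between-subject
# variance + sampling noise; clip so the signal variance is nonnegative.
total_var = observed.var(axis=0, ddof=1)
signal_var = np.clip(total_var - noise_sd ** 2, 0.0, None)

lam = noise_sd ** 2 / (signal_var + noise_sd ** 2)   # shrinkage weight in [0, 1]
shrunk = lam * group_mean + (1 - lam) * observed      # broadcasts over subjects

mse_raw = np.mean((observed - subj_truth) ** 2)
mse_shrunk = np.mean((shrunk - subj_truth) ** 2)
print(mse_shrunk < mse_raw)   # shrinkage reduces error toward the truth
```

The shrunk correlations would then feed into any clustering algorithm, matching the abstract's point that the method is a pre-processing step agnostic to the clustering choice.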

    AN OVERVIEW OF OBSERVATIONAL SLEEP RESEARCH WITH APPLICATION TO SLEEP STAGE TRANSITIONING

    In this manuscript we give an overview of observational sleep research with a particular emphasis on sleep stage transitions. Sleep states represent a categorization of sleep electroencephalogram behavior over the night. We postulate that the rate of transitioning between sleep states is an important predictor of health. This claim is evaluated by comparing subjects with sleep-disordered breathing to matched controls.
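The quantity the abstract highlights, the rate of transitioning between sleep states, is straightforward to compute from a staged hypnogram. The state codes and epoch length below are illustrative (30-second epochs are conventional in sleep staging):

```python
import numpy as np

# Toy computation of the sleep stage transition rate from a hypnogram.
# States are integer-coded per 30-second epoch; values are illustrative.
hypnogram = np.array([0, 0, 1, 1, 1, 2, 2, 1, 1, 0, 0, 0])
epoch_sec = 30.0

transitions = int(np.sum(hypnogram[1:] != hypnogram[:-1]))  # count state changes
hours = len(hypnogram) * epoch_sec / 3600.0
rate = transitions / hours                                  # transitions per hour

print(transitions, round(rate, 1))   # 4 transitions, 40.0 per hour
```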

    MOVELETS: A DICTIONARY OF MOVEMENT

    Recent technological advances provide researchers a way of gathering real-time information on an individual's movement through the use of wearable devices that record acceleration. In this paper, we propose a method for identifying activity types, like walking, standing, and resting, from acceleration data. Our approach decomposes movements into short components called "movelets", and builds a reference for each activity type. Unknown activities are predicted by matching new movelets to the reference. We apply our method to data collected from a single, three-axis accelerometer and focus on activities of interest in studying physical function in elderly populations. An important technical advantage of our methods is that they allow identification of short activities, such as taking two or three steps and then stopping, as well as low frequency rare activities, such as sitting on a chair. Based on our results we provide simple and actionable recommendations for the design and implementation of large epidemiological studies that could collect accelerometry data for the purpose of predicting the time series of activities and connecting it to health outcomes.
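The movelet idea can be sketched as: chop acceleration into short windows, store labeled reference windows per activity, and label a new window by its nearest reference in Euclidean distance. The window length, sampling rate, single-axis signal, and activity names below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

# Minimal movelet sketch: nearest-reference classification of short
# acceleration windows. All signals and settings are illustrative.
rng = np.random.default_rng(2)
win = 10  # samples per movelet (e.g. 1 s at 10 Hz, single axis for brevity)

def make_signal(freq, n=200):
    t = np.arange(n) / 10.0
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=n)

# One labeled reference recording per activity type.
reference = {
    "walking": make_signal(freq=2.0),   # periodic motion
    "resting": make_signal(freq=0.0),   # near-flat signal
}

def movelets(x):
    # All overlapping windows of length `win`.
    return np.stack([x[i:i + win] for i in range(len(x) - win + 1)])

ref_movelets = {k: movelets(v) for k, v in reference.items()}

def classify(segment):
    # Distance from the new movelet to the closest movelet of each activity.
    d = {k: np.min(np.linalg.norm(m - segment, axis=1))
         for k, m in ref_movelets.items()}
    return min(d, key=d.get)

new_walk = make_signal(freq=2.0, n=win)
print(classify(new_walk))   # prints "walking"
```

Because each window is classified independently, even a two- or three-step burst produces a few walking-labeled movelets, which is the short-activity advantage the abstract notes.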

    OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78%]) and the radiologist 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
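The modeling idea above, a voxel-level logistic regression that weights several MRI modalities into a lesion probability per voxel, can be sketched schematically. The synthetic intensities, coefficient values, and plain gradient-descent fit below are illustrative stand-ins for the real OASIS features and fitting software:

```python
import numpy as np

# Schematic voxel-level logistic regression in the spirit of OASIS:
# each voxel's modality intensities are weighted into a lesion probability.
# All data and coefficients here are synthetic and illustrative.
rng = np.random.default_rng(3)
n_voxels = 5000

# Fake normalized intensities for four modalities (T1, T2, FLAIR, PD).
X = rng.normal(size=(n_voxels, 4))
true_w = np.array([-1.0, 1.5, 2.0, 0.5])   # e.g. lesions bright on T2/FLAIR
y = rng.random(n_voxels) < 1 / (1 + np.exp(-(X @ true_w - 1.0)))

# Plain gradient-descent fit of logistic regression with an intercept.
Xb = np.hstack([np.ones((n_voxels, 1)), X])
w = np.zeros(5)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (p - y) / n_voxels

# The fitted coefficients are explicit and could be applied to new images,
# which is the reusability property the abstract emphasizes.
prob = 1 / (1 + np.exp(-(Xb @ w)))          # voxel-level lesion probabilities
print(np.round(w, 2))
```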

    Statistical normalization techniques for magnetic resonance imaging

    While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
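One minimal example of the kind of biologically motivated normalization the abstract argues for is z-scoring intensities against a reference tissue, so a voxel value has the same interpretation across scanners and subjects. The image and the white-matter mask below are synthetic, and this is a generic sketch rather than the paper's exact procedure:

```python
import numpy as np

# Generic tissue-referenced normalization sketch: z-score arbitrary scanner
# units against a reference tissue (here, a hypothetical white-matter mask).
rng = np.random.default_rng(4)
image = rng.normal(loc=600.0, scale=50.0, size=(64, 64))   # arbitrary units
wm_mask = rng.random((64, 64)) < 0.3                        # hypothetical mask

mu, sigma = image[wm_mask].mean(), image[wm_mask].std()
normalized = (image - mu) / sigma   # units: SDs from mean WM intensity

# A normalized value of 2.0 now means "2 white-matter standard deviations
# brighter than typical white matter", regardless of scanner or subject.
print(float(normalized[wm_mask].mean()))   # ~0 within the reference tissue
```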
