
    Soft Null Hypotheses: A Case Study of Image Enhancement Detection in Brain Lesions

    This work is motivated by a study of a population of multiple sclerosis (MS) patients using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to identify active brain lesions. At each visit, a contrast agent is administered intravenously to a subject and a series of images is acquired to reveal the location and activity of MS lesions within the brain. Our goal is to identify and quantify lesion enhancement locations at the subject level and lesion enhancement patterns at the population level. With this example, we aim to address the difficult problem of transforming a qualitative scientific null hypothesis, such as "this voxel does not enhance", into a well-defined and numerically testable null hypothesis based on existing data. We call the procedure "soft null hypothesis" testing, as opposed to the standard "hard null hypothesis" testing. This problem is fundamentally different from: 1) testing when a quantitative null hypothesis is given; 2) clustering using a mixture distribution; or 3) identifying a reasonable threshold with a parametric null assumption. We analyze a total of 20 subjects scanned at 63 visits (~30 GB of data), the largest population of such clinical brain images.
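
    A minimal sketch of the soft-null idea, assuming one can designate a reference set of voxels believed not to enhance and use them to build an empirical null distribution; the statistic, the reference set, and all names below are hypothetical stand-ins, not the paper's actual construction.

```python
# Hypothetical sketch: test each voxel's enhancement statistic against
# an empirical ("soft") null built from reference voxels presumed
# non-enhancing, instead of a hard parametric null such as N(0, 1).
import numpy as np

def soft_null_pvalues(stat, null_stat):
    """Empirical p-values of voxel statistics against a data-driven null.

    stat      : (V,) enhancement statistic for each voxel under test
    null_stat : (M,) same statistic over reference voxels presumed
                non-enhancing (the "soft null" sample)
    """
    null_sorted = np.sort(null_stat)
    # number of null values strictly below each observed statistic
    ranks = np.searchsorted(null_sorted, stat, side="left")
    # upper-tail empirical p-value, P(null >= observed)
    return 1.0 - ranks / (len(null_sorted) + 1.0)

rng = np.random.default_rng(0)
null_stat = rng.normal(0.0, 1.0, size=5000)        # reference voxels
stat = np.r_[rng.normal(0.0, 1.0, 900),            # non-enhancing voxels
             rng.normal(3.0, 1.0, 100)]            # enhancing voxels
p = soft_null_pvalues(stat, null_stat)
print("voxels flagged at p < 0.05:", int((p < 0.05).sum()))
```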

    Structured Functional Principal Component Analysis

    Motivated by modern observational studies, we introduce a class of functional models that extends nested and crossed designs. These models account for the natural inheritance of the correlation structure from the sampling design in studies where the fundamental sampling unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for ultra-high-dimensional data. Methods are illustrated in three examples: high-frequency accelerometer data for daily activity, linguistic pitch data for phonetic analysis, and EEG data for studying electrical brain activity during sleep.
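
    As a minimal sketch of the flavor of estimation involved (assuming a simple two-level nested design Y_ij(t) = mu(t) + X_i(t) + U_ij(t), an assumption made here for illustration, not the paper's full model class), level-specific covariance operators can be estimated by method of moments from cross-products of replicates and then eigendecomposed:

```python
# Generic method-of-moments sketch of multilevel functional PCA for a
# nested design; the two-level model and array shapes are illustrative
# assumptions, not the paper's exact estimator.
import numpy as np

def multilevel_fpca(Y, n_pc=3):
    """Y: (n_subjects, n_reps, n_grid) functional observations."""
    n, J, T = Y.shape
    Yc = Y - Y.mean(axis=(0, 1))                 # remove the mean function
    # Between-subject covariance K_B from cross-products of distinct
    # replicates: E[Yc_ij(s) Yc_ik(t)] = K_B(s, t) for j != k.
    Kb = np.zeros((T, T))
    for j in range(J):
        for k in range(J):
            if j != k:
                Kb += Yc[:, j].T @ Yc[:, k]
    Kb /= n * J * (J - 1)
    # Same-replicate products estimate K_B + K_W; subtract to get K_W.
    Kt = sum(Yc[:, j].T @ Yc[:, j] for j in range(J)) / (n * J)
    Kw = Kt - Kb
    wb, vb = np.linalg.eigh(Kb)                  # ascending eigenvalues
    ww, vw = np.linalg.eigh(Kw)
    return (wb[::-1][:n_pc], vb[:, ::-1][:, :n_pc],
            ww[::-1][:n_pc], vw[:, ::-1][:, :n_pc])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
subj = rng.normal(size=(30, 1, 1)) * np.sin(2 * np.pi * t)   # X_i(t)
Y = subj + 0.5 * rng.normal(size=(30, 4, 50))                # + U_ij(t)
print(multilevel_fpca(Y)[0])   # leading between-subject eigenvalues
```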

    Improving Reliability of Subject-Level Resting-State fMRI Parcellation with Shrinkage Estimators

    Recent interest in resting-state functional magnetic resonance imaging (rsfMRI) includes subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often used to define the network nodes in connectivity studies. While inference has traditionally been performed on group-level data, there is growing interest in parcellating single-subject data. However, this is difficult due to the low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage estimators of such measures, allowing the noisy subject-specific estimator to "borrow strength" in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw intervoxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by clustering the voxels. Our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets (a simulated dataset where the true, subject-specific parcellation is known, and a test-retest dataset consisting of two 7-minute rsfMRI scans from each of 20 subjects), we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw estimates. Application to the test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor cortex by up to 30%.
    Comment: 21 pages (body), 11 figures.
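
    A rough sketch of one such shrinkage step, assuming Fisher-transformed correlations and a simple empirical-Bayes weight (between-subject variance over total variance); the 1/(T-3) sampling-variance approximation and the toy data are common-practice assumptions, not necessarily the paper's exact estimators:

```python
# Hypothetical sketch: shrink a subject's intervoxel correlation matrix
# toward the group mean before clustering the voxels into parcels.
import numpy as np
from sklearn.cluster import KMeans

def shrink_correlations(subj_z, group_z, n_timepoints):
    """Empirical-Bayes shrinkage of Fisher-z correlation matrices.

    subj_z  : (V, V) subject's Fisher-transformed correlations
    group_z : (n_subjects, V, V) stack of all subjects' z-matrices
    """
    group_mean = group_z.mean(axis=0)
    # sampling (within-subject) variance of Fisher z is roughly 1/(T - 3)
    var_within = 1.0 / (n_timepoints - 3)
    # between-subject variance = total variance minus sampling variance
    var_total = group_z.var(axis=0, ddof=1)
    var_between = np.clip(var_total - var_within, 0.0, None)
    lam = var_between / (var_between + var_within)   # shrinkage weight
    return lam * subj_z + (1.0 - lam) * group_mean

# toy usage: cluster voxels using shrunken similarity rows as features
rng = np.random.default_rng(2)
n_subj, V, T = 20, 60, 200                   # hypothetical dimensions
z_all = 0.3 + 0.1 * rng.normal(size=(n_subj, V, V))
z_shrunk = shrink_correlations(z_all[0], z_all, T)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z_shrunk)
print(np.bincount(labels))                   # voxels per parcel
```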

    Grain-size measurements in protoplanetary disks indicate fragile pebbles and low turbulence

    Recent laboratory experiments have revealed that destructive collisions of icy dust particles may occur at much lower velocities than previously believed. These low fragmentation velocities push down the maximum grain size in collisional growth models. Motivated by the smooth radial distribution of pebble sizes inferred from ALMA/VLA multi-wavelength continuum analysis, we propose a concise model to explain this feature and aim to constrain the turbulence level at the midplane of protoplanetary disks. Our approach is built on the assumption that the fragmentation threshold is the primary barrier limiting pebble growth within pressure maxima. Consequently, the grain size at the ring location can provide direct insights into the turbulent velocity governing pebble collisions and, by extension, the turbulence level at the disk midplane. We validate this method using the DustPy code, which simulates dust transport and coagulation. We apply our method to seven disks (TW Hya, IM Lup, GM Aur, AS 209, HL Tau, HD 163296, and MWC 480) for which grain sizes have been measured from multi-wavelength continuum analysis. A common feature emerges from our analysis: an overall low turbulence coefficient of $\alpha \sim 10^{-4}$ in five of the seven disks when adopting a fragmentation velocity of $v_{\rm frag} = 1\,{\rm m\,s^{-1}}$. A higher fragmentation velocity would imply a turbulence coefficient significantly larger than current observational constraints. IM Lup stands out with a relatively higher coefficient of $\alpha \sim 10^{-3}$. Notably, HL Tau exhibits an increasing trend of $\alpha$ with distance, which supports enhanced turbulence in its outer disk region, possibly associated with the infalling streamer onto HL Tau. The current (sub)mm pebble sizes constrained in disks imply low levels of turbulence, as well as fragile pebbles, consistent with recent laboratory measurements.
    Comment: 11 pages, 8 figures, 2 tables. Accepted for publication in A&A.
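
    The inversion can be sketched with the standard fragmentation-limited Stokes number, $St_{\rm frag} = v_{\rm frag}^2/(3\alpha c_s^2)$, combined with the Epstein-drag relation $St = \pi a \rho_s/(2\Sigma_g)$; the disk numbers below are illustrative placeholders, not values from the paper:

```python
# Back-of-the-envelope version of the inversion: if growth at a ring is
# fragmentation-limited, the measured maximum pebble size fixes the
# midplane turbulence alpha. Input values below are illustrative only.
import numpy as np

def alpha_from_grain_size(a_max, sigma_gas, c_s, v_frag=100.0, rho_s=1.0):
    """All quantities in cgs units.

    a_max     : measured maximum pebble size [cm]
    sigma_gas : gas surface density [g cm^-2]
    c_s       : midplane sound speed [cm s^-1]
    v_frag    : fragmentation velocity [cm s^-1] (default 1 m/s)
    rho_s     : grain material density [g cm^-3]
    """
    st = np.pi * a_max * rho_s / (2.0 * sigma_gas)   # Epstein Stokes number
    return v_frag**2 / (3.0 * st * c_s**2)           # invert St_frag relation

# e.g., mm pebbles at a ring with Sigma_g ~ 10 g/cm^2 and c_s ~ 300 m/s
print(f"alpha ~ {alpha_from_grain_size(0.1, 10.0, 3e4):.1e}")   # ~2e-4
```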

    Simulations of Triple Microlensing Events I: Detectability of a scaled Sun-Jupiter-Saturn System

    To date, only 13 firmly established triple microlensing events have been discovered, so the occurrence rates of microlensing two-planet systems and of planets in binary systems are still uncertain. With the upcoming space-based microlensing surveys, hundreds of triple microlensing events will be detected. To provide clues for future observations and statistical analyses, we initiate a project to investigate the detectability of triple-lens systems with different configurations and observational setups. As a first step, in this work we develop the simulation software and investigate the detectability of a scaled Sun-Jupiter-Saturn system with the recently proposed microlensing telescope of the "Earth 2.0 (ET)" mission. We find that the detectability of the scaled Sun-Jupiter-Saturn analog is about 1%. In addition, the presence of the Jovian planet suppresses the detectability of the Saturn-like planet by ~13%, regardless of the adopted detection $\Delta\chi^2$ threshold. This suppression could be at the same level as the Poisson noise of future space-based statistical samples of triple lenses, so it is inappropriate to treat each planet separately during detection efficiency calculations.
    Comment: 14 pages, 11 figures. Submitted to MNRAS; comments welcome.
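
    A minimal sketch of the detection criterion such simulations typically rest on: a planet counts as detected when removing it from the model degrades the fit by more than a chosen $\Delta\chi^2$ threshold. The light-curve models and threshold below are stand-ins; real work would use a numerical triple-lens magnification code:

```python
# Hypothetical sketch of a Delta-chi^2 detection test; the Gaussian
# "light curves" are stand-ins for real magnification models.
import numpy as np

def delta_chi2(flux_obs, flux_err, model_full, model_reduced):
    """Fit improvement of the full model over the reduced (no-planet) one."""
    chi2 = lambda m: np.sum(((flux_obs - m) / flux_err) ** 2)
    return chi2(model_reduced) - chi2(model_full)

THRESHOLD = 160.0   # one commonly used value; papers explore several

rng = np.random.default_rng(3)
t = np.linspace(-1.0, 1.0, 500)
main = 1.0 + 2.0 * np.exp(-t**2 / 0.02)            # stand-in host event
anomaly = 0.3 * np.exp(-(t - 0.2) ** 2 / 0.0005)   # second-planet bump
err = np.full_like(t, 0.02)
obs = main + anomaly + rng.normal(0.0, err)
print("detected:", delta_chi2(obs, err, main + anomaly, main) >= THRESHOLD)
```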