
    Learning Fourier-Constrained Diffusion Bridges for MRI Reconstruction

    Recent years have witnessed a surge in deep generative models for accelerated MRI reconstruction. Diffusion priors in particular have gained traction owing to their superior representational fidelity and diversity. Rather than learning the target transformation from undersampled to fully-sampled data directly, common diffusion priors are trained to learn a multi-step transformation from Gaussian noise onto fully-sampled data. During inference, data-fidelity projections are injected between reverse diffusion steps to reach a compromise solution within the span of both the diffusion prior and the imaging operator. Unfortunately, suboptimal solutions can arise because the normality assumption of the diffusion prior causes a divergence between the learned and target transformations. To address this limitation, here we introduce the first diffusion bridge for accelerated MRI reconstruction. The proposed Fourier-constrained diffusion bridge (FDB) leverages a generalized process to transform between undersampled and fully-sampled data via random noise addition and random frequency removal as degradation operators. Unlike common diffusion priors that use an asymptotic endpoint based on Gaussian noise, FDB captures a transformation between finite endpoints, where the initial endpoint is based on moderate degradation of fully-sampled data. Demonstrations on brain MRI indicate that FDB outperforms state-of-the-art reconstruction methods, including conventional diffusion priors.
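    As a concrete illustration of the degradation operators named above, the following minimal Python sketch applies one mild forward-degradation step combining random frequency removal with random noise addition in k-space. It is not the authors' implementation; the array shapes, keep fraction, and noise scale are illustrative assumptions.

        import numpy as np

        def degrade(image, keep_fraction=0.9, noise_std=0.01, rng=None):
            # One mild degradation step: drop a random subset of k-space frequencies
            # (random frequency removal) and add complex Gaussian noise (random noise
            # addition), then return to the image domain.
            rng = np.random.default_rng() if rng is None else rng
            kspace = np.fft.fftshift(np.fft.fft2(image))
            mask = rng.random(kspace.shape) < keep_fraction
            noise = noise_std * (rng.standard_normal(kspace.shape)
                                 + 1j * rng.standard_normal(kspace.shape))
            kspace = kspace * mask + noise
            return np.fft.ifft2(np.fft.ifftshift(kspace))

        # Example: a moderately degraded toy image playing the role of the bridge's
        # finite initial endpoint (illustrative values only).
        x = np.random.rand(64, 64)
        x_moderate = degrade(x, keep_fraction=0.8, noise_std=0.05)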

    Image Segmentation Using Weak Shape Priors

    The problem of image segmentation is known to become particularly challenging in the case of partial occlusion of the object(s) of interest, background clutter, and the presence of strong noise. To overcome this problem, the present paper introduces a novel approach to segmentation based on "weak" shape priors. Specifically, in the proposed method, a segmenting active contour is constrained to converge to a configuration at which the empirical probability densities of its geometric parameters closely match the corresponding model densities learned from training samples. It is shown through numerical experiments that the proposed shape modeling can be regarded as "weak" in the sense that it minimally influences the segmentation, which is allowed to be dominated by data-related forces. On the other hand, the priors provide sufficient constraints to regularize the convergence of the segmentation, while requiring substantially smaller training sets to yield less biased results than PCA-based regularization methods. The main advantages of the proposed technique over some existing alternatives are demonstrated in a series of experiments. Comment: 27 pages, 8 figures
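    To make the notion of a "weak" prior more tangible, the sketch below is an assumption-laden illustration, not the paper's formulation: it scores how far the empirical density of a contour's geometric parameters sits from a model density learned on training shapes, via a histogram-based KL divergence; a small penalty lets data-related forces dominate.

        import numpy as np

        def weak_prior_penalty(contour_params, model_density, bin_edges, eps=1e-8):
            # KL(empirical || model) over a fixed binning: small values mean the
            # contour's parameter statistics already match the learned densities.
            emp, _ = np.histogram(contour_params, bins=bin_edges)
            emp = emp / (emp.sum() + eps)
            mod = model_density / (model_density.sum() + eps)
            return float(np.sum(emp * np.log((emp + eps) / (mod + eps))))

        # Example with placeholder quantities: curvature samples along an evolving
        # contour versus a stand-in for a density learned from training samples.
        bins = np.linspace(-1.0, 1.0, 21)
        learned = np.exp(-np.linspace(-1.0, 1.0, 20) ** 2)
        penalty = weak_prior_penalty(np.random.normal(0.0, 0.3, 200), learned, bins)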

    Quantifying Tensions between CMB and Distance Datasets in Models with Free Curvature or Lensing Amplitude

    Recent measurements of the Cosmic Microwave Background (CMB) by the Planck Collaboration have produced arguably the most powerful observational evidence in support of the standard model of cosmology, i.e. the spatially flat ΛCDM paradigm. In this work, we perform model selection tests to examine whether the base CMB temperature and large-scale polarization anisotropy data from Planck 2015 (P15) prefer any of eight commonly used one-parameter model extensions with respect to flat ΛCDM. We find a clear preference for models with free curvature, Ω_K, or free amplitude of the CMB lensing potential, A_L. We also further develop statistical tools to measure tension between datasets. We use a Gaussianization scheme to compute tensions directly from the posterior samples using an entropy-based method, the surprise, as well as a calibrated evidence ratio presented here for the first time. We then proceed to investigate the consistency between the base P15 CMB data and six other CMB and distance datasets. In flat ΛCDM we find a 4.8σ tension between the base P15 CMB data and a distance-ladder measurement, whereas the former are consistent with the other datasets. In the curved ΛCDM model we find significant tensions in most of the cases, arising from the well-known low power of the low-ℓ multipoles of the CMB data. In the flat ΛCDM+A_L model, however, all datasets are consistent with the base P15 CMB observations except for the CMB lensing measurement, which remains in significant tension. This tension is driven by the increased power of the CMB lensing potential derived from the base P15 CMB constraints in both models, pointing at either potentially unresolved systematic effects or the need for new physics beyond the standard flat ΛCDM model. Comment: 16 pages, 8 figures, 6 tables
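    The entropy-based tension measure mentioned above relies on the relative entropy between posteriors after Gaussianization, which has a closed form for Gaussians. The hedged Python sketch below shows only that closed-form KL divergence on toy inputs; the calibration of the expected relative entropy that defines the surprise, and the calibrated evidence ratio, are not reproduced here.

        import numpy as np

        def gaussian_kl(mu1, cov1, mu2, cov2):
            # D_KL(N(mu1, cov1) || N(mu2, cov2)) in nats for d-dimensional Gaussians.
            d = len(mu1)
            cov2_inv = np.linalg.inv(cov2)
            dmu = mu2 - mu1
            return 0.5 * (np.trace(cov2_inv @ cov1) + dmu @ cov2_inv @ dmu - d
                          + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))

        # Example: two toy 2-parameter Gaussianized posteriors (illustrative numbers).
        kl_nats = gaussian_kl(np.array([0.0, 0.0]), np.eye(2),
                              np.array([1.0, 0.5]), 1.5 * np.eye(2))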

    Fully Bayesian Logistic Regression with Hyper-Lasso Priors for High-dimensional Feature Selection

    High-dimensional feature selection arises in many areas of modern science. For example, in genomic research we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal) from tens of thousands of genes that are active (expressed) in certain tissue cells. To this end, we wish to fit regression and classification models with a large number of features (also called variables or predictors). In the past decade, penalized likelihood methods for fitting regression models based on hyper-LASSO penalization have received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) remain underdeveloped in the literature. In this paper we introduce an MCMC (fully Bayesian) method for learning the severely multi-modal posteriors of logistic regression models based on hyper-LASSO priors (non-convex penalties). Our MCMC algorithm uses Hamiltonian Monte Carlo within a restricted Gibbs sampling framework; we call our method Bayesian logistic regression with hyper-LASSO (BLRHL) priors. We use simulation studies and real data analysis to demonstrate the superior performance of hyper-LASSO priors and to investigate the issues of choosing the heaviness and scale of hyper-LASSO priors. Comment: 33 pages. arXiv admin note: substantial text overlap with arXiv:1308.469
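    For orientation, the following minimal sketch is an assumption, not the BLRHL implementation: it writes down the unnormalized log-posterior of a logistic regression under a heavy-tailed, Student-t style shrinkage prior, the kind of non-convex penalty that hyper-LASSO priors induce. The prior's degrees of freedom and scale are placeholders, and the paper's HMC-within-restricted-Gibbs sampler is not shown.

        import numpy as np

        def log_posterior(beta, X, y, df=1.0, scale=0.1):
            # Logistic log-likelihood (y in {0, 1}) plus a heavy-tailed shrinkage
            # log-prior on each coefficient; heavier tails and smaller scale give a
            # more strongly non-convex, multi-modal posterior.
            logits = X @ beta
            loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
            logprior = -0.5 * (df + 1.0) * np.sum(np.log1p(beta ** 2 / (df * scale ** 2)))
            return loglik + logprior

        # Example: evaluate on toy high-dimensional data (n=50 samples, p=200 features).
        rng = np.random.default_rng(0)
        X, y = rng.standard_normal((50, 200)), rng.integers(0, 2, 50)
        lp = log_posterior(np.zeros(200), X, y)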