994 research outputs found

    Blind deconvolution of sparse pulse sequences under a minimum distance constraint: a partially collapsed Gibbs sampler method

    For blind deconvolution of an unknown sparse sequence convolved with an unknown pulse, a powerful Bayesian method employs the Gibbs sampler in combination with a Bernoulli–Gaussian prior modeling sparsity. In this paper, we extend this method by introducing a minimum distance constraint for the pulses in the sequence. This is physically relevant in applications including layer detection, medical imaging, seismology, and multipath parameter estimation. We propose a Bayesian method for blind deconvolution that is based on a modified Bernoulli–Gaussian prior including a minimum distance constraint factor. The core of our method is a partially collapsed Gibbs sampler (PCGS) that tolerates and even exploits the strong local dependencies introduced by the minimum distance constraint. Simulation results demonstrate significant performance gains compared to a recently proposed PCGS. The main advantages of the minimum distance constraint are a substantial reduction of computational complexity and of the number of spurious components in the deconvolution result.
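    To illustrate the prior this abstract builds on, the sketch below draws a spike train from a Bernoulli–Gaussian model with a minimum-distance factor, using simple forward sampling. This is only an illustration of the prior's support; the paper's actual inference method is a partially collapsed Gibbs sampler over the posterior, and all parameter values here (`p`, `sigma`, `d_min`) are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_bg_min_dist(n, p=0.1, sigma=1.0, d_min=5):
        """Draw one spike train from a Bernoulli-Gaussian prior with a
        minimum-distance constraint (illustrative forward sampler only).

        A location can be active with probability p unless a previous spike
        lies within d_min samples; active amplitudes are N(0, sigma^2),
        inactive locations are exactly zero.
        """
        x = np.zeros(n)
        last = -d_min  # index of the most recent spike
        for k in range(n):
            if k - last >= d_min and rng.random() < p:
                x[k] = rng.normal(0.0, sigma)
                last = k
        return x
    ```

    By construction, any two nonzero entries of the returned sequence are at least `d_min` samples apart, which is exactly the local dependency structure the paper's sampler must handle.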

    PSF Sampling in Fluorescence Image Deconvolution

    Microscope imaging is inherently resolution-limited by out-of-focus light and diffraction effects. The traditional approach to restoring image resolution is to use a deconvolution algorithm to “invert” the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and the choice of stopping criterion. In this paper, we reconstruct an explicit volumetric representation of the fluorescence density in the sample and fit a neural network to the target z-stack to minimize a reconstruction cost function. Additionally, we perform weighted sampling of the point spread function to avoid unnecessary computation and prioritize non-zero signals. In a baseline comparison, our algorithm outperforms the Richardson–Lucy method on images affected by high levels of noise.
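    For context on the baseline this abstract compares against, here is a minimal 1-D sketch of classic Richardson–Lucy deconvolution. It is not the paper's method; iteration count and the tiny floor used to avoid division by zero are arbitrary choices for the sketch.

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        """Classic Richardson-Lucy deconvolution (1-D for simplicity).

        observed: blurred, nonnegative measurement
        psf: point spread function (normalized internally to sum to 1)
        """
        psf = psf / psf.sum()
        psf_flip = psf[::-1]  # adjoint of the convolution operator
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            # multiplicative update; floor avoids division by zero
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, psf_flip, mode="same")
        return estimate
    ```

    The multiplicative update keeps the estimate nonnegative, but on noisy data it progressively amplifies noise, which is the weakness ("stopping criterion") the abstract refers to.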

    Maximum A Posteriori Deconvolution of Sparse Spike Trains


    A new level-set-based protocol for accurate bone segmentation from CT imaging

    A new medical image segmentation pipeline for accurate bone segmentation from computed tomography (CT) imaging is proposed in this paper. It is a two-step methodology, with a pre-segmentation step and a segmentation refinement step, as follows. First, the user performs a rough segmentation of the desired region of interest. Second, a fully automatic refinement step is applied to the pre-segmented data. The automatic segmentation refinement is composed of several sub-steps, namely, image deconvolution, image cropping, and interpolation. The user-defined pre-segmentation is then refined over the deconvolved, cropped, and up-sampled version of the image. The performance of the proposed algorithm is exemplified with the segmentation of CT images of a composite femur bone, reconstructed with different reconstruction protocols. Segmentation outcomes are validated against a gold standard model, obtained using the coordinate measuring machine Nikon Metris LK V20 with a digital line scanner LC60-D and a resolution of 28 Όm. High sub-pixel accuracy models are obtained for all tested data sets, with a maximum average deviation of 0.178 mm from the gold standard. The algorithm is able to produce high quality segmentation of the composite femur regardless of the surface meshing strategy used. The authors would also like to acknowledge Hospital CUF, Porto (Portugal), Clínica Dr. Campos Costa, Porto (Portugal), and ISQ, Instituto de Soldadura e Qualidade, for all technical support provided during this work.
