
    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate the blur in either inclusive or exclusive forms, these methods remain practically challenging due to high computational cost and low image reconstruction quality. High accuracy and high speed are both prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
    Comment: 15 pages, for publication in IEEE Transactions on Image Processing
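    The core idea lends itself to a compact sketch. The Python snippet below is purely illustrative (not the authors' code): it assumes a Gaussian PSF, truncates the inverse-filter series after two even-derivative terms, uses Laplacian FIR filters as the even derivatives, and treats the function and parameter names as placeholders. A light Gaussian pre-filter stands in for decoupling denoising from edge deblurring.

```python
# Minimal sketch: approximate the inverse of a Gaussian PSF by a truncated
# series of even-derivative FIR filters, so deblurring is one convolution pass.
# Truncation order, sigma values, and names are illustrative assumptions.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def even_derivative_deblur(blurry, sigma=1.5, denoise_sigma=0.5):
    """Sharpen an image blurred by an (assumed) Gaussian PSF of std `sigma`.

    In the Fourier domain 1/G(w) = exp(sigma^2 |w|^2 / 2); its Maclaurin
    expansion maps to a weighted sum of even spatial derivatives:
        f_sharp ~= f - (sigma^2/2) * Lap(f) + (sigma^4/8) * Lap(Lap(f))
    """
    img = blurry.astype(np.float64)
    # Light Gaussian pre-smoothing stands in for decoupling denoising
    # from edge deblurring (the paper uses a Gaussian low-pass for this).
    img = gaussian_filter(img, denoise_sigma)
    lap1 = laplace(img)    # second-order even derivative (FIR Laplacian)
    lap2 = laplace(lap1)   # fourth-order even derivative
    return img - 0.5 * sigma**2 * lap1 + 0.125 * sigma**4 * lap2

# Usage (hypothetical): sharp = even_derivative_deblur(blurry_image, sigma=2.0)
```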

    Undersampling reconstruction in parallel and single coil imaging with COMPaS -- COnvolutional Magnetic Resonance Image Prior with Sparsity regularization

    Purpose: To propose COMPaS, a learning-free Convolutional Network, that combines Deep Image Prior (DIP) with transform-domain sparsity constraints to reconstruct undersampled Magnetic Resonance Imaging (MRI) data without previous training of the network. Methods: COMPaS uses a U-Net as DIP for undersampled MR data in the image domain. Reconstruction is constrained by data fidelity to k-space measurements and transform-domain sparsity, such as Total Variation (TV) or Wavelet transform sparsity. Two-dimensional MRI data from the public FastMRI dataset with Cartesian undersampling in the phase-encoding direction were reconstructed for different acceleration rates (R) from R = 2 to R = 8 for single-coil and multicoil data. Performance of the proposed architecture was compared to Parallel Imaging with Compressed Sensing (PICS). Results: COMPaS outperforms standard PICS algorithms by reducing ghosting artifacts and yielding higher quantitative reconstruction quality metrics in multicoil imaging settings and especially in single-coil k-space reconstruction. Furthermore, COMPaS can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Conclusion: COMPaS utilizes a training-free convolutional network as a DIP in MRI reconstruction and combines it with transform-domain sparsity regularization. It is a competitive algorithm for parallel imaging and a novel tool for accelerating single-coil MRI.
    Comment: 13 pages, 8 figures, 2 tables
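    As a rough illustration of this reconstruction loop, the following PyTorch sketch (a minimal stand-in, not the COMPaS implementation: a tiny CNN replaces the U-Net, and lam_tv, the learning rate, and the iteration count are assumed values) fits an untrained network to a single undersampled single-coil k-space measurement with a TV penalty in the image domain.

```python
# Learning-free DIP sketch for single-coil undersampled MRI (illustrative only).
# Network size, lam_tv, n_iter, and lr are assumptions, not the paper's settings.
import torch
import torch.nn as nn

def tv_loss(x):
    """Simple total-variation penalty on a 2D image (assumed regularizer)."""
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def dip_reconstruct(kspace, mask, n_iter=2000, lam_tv=1e-3):
    """kspace: complex (H, W) undersampled measurement; mask: (H, W) 0/1 sampling mask."""
    H, W = kspace.shape
    net = nn.Sequential(                      # tiny CNN standing in for the DIP U-Net
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 2, 3, padding=1),       # two channels: real and imaginary parts
    )
    z = torch.randn(1, 32, H, W)              # fixed random input to the prior network
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(n_iter):
        out = net(z)
        img = torch.complex(out[:, 0], out[:, 1])          # (1, H, W) complex image
        k_est = torch.fft.fft2(img, norm="ortho")
        fidelity = ((k_est - kspace) * mask).abs().pow(2).mean()  # k-space data consistency
        loss = fidelity + lam_tv * tv_loss(img.abs())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return img.detach().squeeze(0).abs()
```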

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE, review of recently proposed neuromorphic computing platforms and systems
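    To make the memory/processing co-location concrete, the following illustrative Python sketch (not taken from any of the surveyed platforms; the neuron model, sizes, and constants are assumptions) models a single neuromorphic "core" that keeps its synaptic weights local to the leaky integrate-and-fire neuron state it updates, so each step computes without fetching weights from a separate memory.

```python
# Illustrative co-located memory/processing "core": synaptic weights and
# neuron state live together and are updated in place. Constants are assumed.
import numpy as np

class LIFCore:
    def __init__(self, n_in, n_out, tau=20.0, v_th=1.0, dt=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(0.0, 0.1, (n_out, n_in))  # synaptic memory, local to the core
        self.v = np.zeros(n_out)                      # membrane potentials (neuron state)
        self.decay = np.exp(-dt / tau)
        self.v_th = v_th

    def step(self, in_spikes):
        """Advance one timestep given a binary input spike vector."""
        self.v = self.decay * self.v + self.w @ in_spikes   # leak + integrate
        out_spikes = (self.v >= self.v_th).astype(float)    # fire
        self.v[out_spikes > 0] = 0.0                         # reset after spike
        return out_spikes

# Usage (hypothetical): core = LIFCore(128, 32); out = core.step(np.random.binomial(1, 0.05, 128))
```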