    Sparse MRI and CT Reconstruction

    Sparse signal reconstruction is of the utmost importance for efficient medical imaging, for accurate screening in security and inspection, and for non-destructive testing. The sparsity of the signal is dictated either by feasibility or by the cost and screening-time constraints of the system. In this work, two major sparse signal reconstruction systems, compressed sensing magnetic resonance imaging (MRI) and sparse-view computed tomography (CT), are investigated. For medical CT, a limited number of views (sparse-view acquisition) is an option for reducing either the amount of ionizing radiation or the screening time and cost of the procedure. In applications such as non-destructive testing or the inspection of large objects, like a cargo container, one angular view can take up to a few minutes for a single slice. In other cases, some views are unavailable due to the configuration of the system. The problem of data sufficiency, and of how to estimate a tomographic image when the projection data are not sufficient for precise reconstruction, is one of the two major objectives of this work. Three CT reconstruction methods are proposed: algebraic iterative reconstruction-reprojection (AIRR), sparse-view CT reconstruction based on curvelet and total variation regularization (CTV), and sparse-view CT reconstruction based on nonconvex L1-L2 regularization. The experimental results confirm high performance on both subjective and objective quality metrics. Additionally, sparse-view neutron-photon tomography is studied with Monte-Carlo modelling to demonstrate shape reconstruction, material discrimination, and visualization based on the proposed 3D object reconstruction method and material discrimination signatures. One of the methods for efficient acquisition of multidimensional signals is compressed sensing (CS). A significantly reduced number of measurements can be obtained in different ways; one is undersampling, that is, sampling below the Shannon-Nyquist limit. Magnetic resonance imaging (MRI) suffers inherently from slow data acquisition. Compressed sensing MRI (CSMRI) offers a significant scan-time reduction, with advantages for patients and health-care economics. In this work, three frameworks are proposed and evaluated: CSMRI based on the curvelet transform and total generalized variation (CT-TGV), CSMRI using curvelet sparsity and nonlocal total variation (CS-NLTV), and CSMRI exploiting shearlet sparsity and nonlocal total variation (SS-NLTV). The proposed methods are evaluated experimentally and compared to previously reported state-of-the-art methods. The results demonstrate a significant improvement in image reconstruction quality on different medical MRI datasets.
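    To make the compressed-sensing setting above concrete, here is a minimal illustrative sketch (not the CTV, CT-TGV, CS-NLTV, or SS-NLTV methods of the thesis): it reconstructs an image from undersampled k-space samples by gradient descent on a least-squares data term plus a smoothed total-variation penalty. The function names, parameter values, and random sampling mask are assumptions made for the example.

```python
import numpy as np

def undersampled_fft(x, mask):
    """Forward model: orthonormal 2D FFT followed by k-space undersampling."""
    return mask * np.fft.fft2(x, norm="ortho")

def adjoint(y, mask):
    """Adjoint of the forward model (zero-filled inverse FFT)."""
    return np.real(np.fft.ifft2(mask * y, norm="ortho"))

def tv_gradient(x, eps=1e-2):
    """Gradient of a smoothed (Charbonnier) total-variation penalty."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def cs_mri_tv(y, mask, lam=0.01, step=0.5, iters=300):
    """Gradient descent on 0.5*||M F x - y||^2 + lam * TV_eps(x)."""
    x = adjoint(y, mask)                                  # zero-filled starting image
    for _ in range(iters):
        grad_data = adjoint(undersampled_fft(x, mask) - y, mask)
        x -= step * (grad_data + lam * tv_gradient(x))
    return x

# toy usage: a piecewise-constant phantom seen through 25% random k-space samples
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
mask = rng.random(phantom.shape) < 0.25
y = undersampled_fft(phantom, mask)
recon = cs_mri_tv(y, mask)
```

    Swapping the TV penalty for curvelet or shearlet sparsity, as the thesis does, changes only the regularizer and its gradient or proximal step; the data term stays the same.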

    High-Order Sparsity Exploiting Methods with Applications in Imaging and PDEs

    High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations, and they have widespread use in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on ℓ1 regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and ℓ1 regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations. (Doctoral Dissertation, Applied Mathematics)
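    As a simplified, hedged illustration of ℓ1-regularized recovery of a piecewise smooth function from sparse and noisy Fourier data, the sketch below uses first-order differences as the sparsifying transform (a stand-in for the higher-order operators studied in the dissertation) and solves the resulting problem with a standard ADMM splitting. The frequency selection, function names, and parameter values are assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (the proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def partial_fourier_matrix(n, freqs):
    """Stack real and imaginary parts of the selected DFT rows so f stays real."""
    rows = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n) / np.sqrt(n)
    return np.vstack([rows.real, rows.imag])

def recover_piecewise_smooth(y, A, lam=0.02, rho=1.0, iters=200):
    """ADMM for min_f 0.5*||A f - y||^2 + lam*||D f||_1 with D = first difference."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                        # (n-1) x n difference operator
    z = np.zeros(n - 1); u = np.zeros(n - 1)
    lhs = A.T @ A + rho * D.T @ D
    for _ in range(iters):
        f = np.linalg.solve(lhs, A.T @ y + rho * D.T @ (z - u))
        z = soft(D @ f + u, lam / rho)
        u += D @ f - z
    return f

# toy usage: a piecewise-constant signal observed through 40 noisy frequencies
rng = np.random.default_rng(0)
n = 128
f_true = np.zeros(n); f_true[30:70] = 1.0; f_true[90:110] = -0.5
freqs = np.r_[0, rng.choice(np.arange(1, n), 39, replace=False)]  # keep DC so the system is well posed
A = partial_fourier_matrix(n, freqs)
y = A @ f_true + 0.01 * rng.normal(size=A.shape[0])
f_hat = recover_piecewise_smooth(y, A)
```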

    Sparse and Redundant Representations for Inverse Problems and Recognition

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for automatically determining the threshold values for noise shrinkage at each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, from partial measurement samples collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate experimentally that this new technique handles both random and restricted sampling scenarios more flexibly than its competitors. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximation and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
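    The recognition-by-residual idea in the last part can be illustrated with a small sketch that skips dictionary learning: normalized training samples of each class serve as dictionary atoms, a test sample is coded with orthogonal matching pursuit against each class dictionary, and the class with the smallest residual wins. The data, names, and sparsity level below are hypothetical.

```python
import numpy as np

def omp(D, x, sparsity=5):
    """Orthogonal matching pursuit: greedily select atoms of D to approximate x."""
    residual, support = x.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, x, rcond=None)       # refit on the chosen atoms
        residual = x - sub @ c
    coeffs[support] = c
    return coeffs, np.linalg.norm(residual)

def classify_by_residual(class_dicts, x, sparsity=5):
    """Code x against each class dictionary and pick the class with smallest residual."""
    residuals = [omp(D, x, sparsity)[1] for D in class_dicts]
    return int(np.argmin(residuals)), residuals

# toy usage: two classes whose training samples span different random subspaces
rng = np.random.default_rng(4)
bases = [rng.normal(size=(64, 8)) for _ in range(2)]
class_dicts = []
for basis in bases:
    atoms = basis @ rng.normal(size=(8, 20))              # 20 training samples per class
    class_dicts.append(atoms / np.linalg.norm(atoms, axis=0))
test = bases[1] @ rng.normal(size=8) + 0.01 * rng.normal(size=64)
label, res = classify_by_residual(class_dicts, test)      # expect label == 1
```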

    Super resolution and dynamic range enhancement of image sequences

    Camera manufacturers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller. This fact motivates performing resolution enhancement in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene in order to form an image with higher resolution than all of the observations. There is a demand for high resolution images in biomedical imaging, surveillance, aerial/satellite imaging, and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, little attention has been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast-invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among the frames of a video. The repeatability rate is increased by computing the autocorrelation matrix using the gradients of contrast-stretched input images. The presented contrast-invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color-constancy constraint set is devised and incorporated into the POCS framework for increasing the resolution of color-filter-array sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and higher visual quality in the final image.
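    The contrast-invariant detection idea, computing the Harris autocorrelation matrix from gradients of a contrast-stretched image, can be sketched as below. The percentile stretch, smoothing scale, and Harris constant are assumed values for illustration, not the exact algorithm of the thesis.

```python
import numpy as np
from scipy import ndimage

def contrast_stretch(img, low_pct=1, high_pct=99):
    """Map the chosen percentile range onto [0, 1] to normalize illumination."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def harris_response(img, sigma=1.5, k=0.04):
    """Harris corner response computed from smoothed gradient products."""
    Ix = ndimage.sobel(img, axis=1, mode="reflect")
    Iy = ndimage.sobel(img, axis=0, mode="reflect")
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy) ** 2

# toy usage: the same corner pattern under a much dimmer exposure
img = np.zeros((64, 64)); img[20:, 20:] = 1.0
dim = 0.1 * img + 0.05                                    # low-contrast frame
r_raw = harris_response(dim)                              # weak responses
r_norm = harris_response(contrast_stretch(dim))           # responses comparable to the bright frame
```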

    Doctor of Philosophy

    Shape analysis is a well-established tool for processing surfaces. It is often a first step in performing tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces where the geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first perform reconstruction on such a point cloud prior to shape analysis, if the reconstructed geometry and topology are far from the true surface, this can have an adverse impact on the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction, in order to obtain a better understanding of the types of point clouds for which reconstruction methods have difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting the normals of such challenging point clouds using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and apply it to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which is used for matching incomplete shapes undergoing nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
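    As a rough illustration of a diffusion process defined directly on a point cloud (a generic k-nearest-neighbour graph heat diffusion, not the medial-ball construction developed in the thesis), consider the sketch below; the neighbourhood size, kernel width, and toy data are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import expm

def knn_laplacian(points, k=8):
    """Graph Laplacian of a symmetric k-nearest-neighbour graph with Gaussian weights."""
    dists, idx = cKDTree(points).query(points, k=k + 1)   # column 0 is the point itself
    sigma = dists[:, 1:].mean()
    n = len(points)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = np.exp(-(dists[i, 1:] / sigma) ** 2)
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(axis=1)) - W

def diffuse(points, seed, t=0.5, k=8):
    """Diffuse an indicator placed at `seed` for time t over the point-cloud graph."""
    L = knn_laplacian(points, k)
    u0 = np.zeros(len(points)); u0[seed] = 1.0
    return expm(-t * L) @ u0                              # dense heat kernel; fine for small clouds

# toy usage: points sampled from a noisy circle with a missing arc
rng = np.random.default_rng(1)
theta = rng.uniform(0, 1.6 * np.pi, 300)                  # leave a gap in the sampling
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(300, 2))
heat = diffuse(pts, seed=0)                               # decays smoothly along the sampled curve
```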

    Recent Techniques for Regularization in Partial Differential Equations and Imaging

    Inverse problems model real world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors, and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty term to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, incorporating the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain. This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contains false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with ℓ1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purposes of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each problem create unique challenges. Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems. (Doctoral Dissertation, Mathematics)
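    A minimal sketch of the edge-domain sparsity exploited here: an m-th order finite difference (a uniform-grid stand-in for the Polynomial Annihilation operator) annihilates piecewise polynomials away from jumps, and a variance-based weight of the kind used in variance based joint sparsity down-weights locations where the measurement vectors disagree. The names and parameter values are hypothetical.

```python
import numpy as np

def high_order_difference(f, order=3):
    """m-th order forward difference: zero on polynomials of degree < m, large near jumps."""
    g = f.copy()
    for _ in range(order):
        g = np.diff(g)
    return g

def vbjs_weights(sparse_domain_vectors, eps=1e-8):
    """Crude variance-based weights: low variance across vectors means high confidence."""
    return 1.0 / (np.var(sparse_domain_vectors, axis=0) + eps)

# toy usage: a piecewise-quadratic signal with a single jump at x = 0
x = np.linspace(-1, 1, 257)
f = np.where(x < 0, x**2, x**2 + 1.0)
edges = high_order_difference(f, order=3)                 # ~zero except within a few samples of the jump
print(np.count_nonzero(np.abs(edges) > 1e-6))

# several noisy sparse-domain vectors, one of them misleading
rng = np.random.default_rng(2)
copies = np.stack([edges + 0.01 * rng.normal(size=edges.size) for _ in range(5)])
copies[0] += 0.5 * rng.normal(size=edges.size)            # a corrupted measurement vector
w = vbjs_weights(copies)                                  # small weights where the vectors disagree
```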

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionalities and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
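    A generic base/detail decomposition (not the 3D signal analysis tool developed in this research) illustrates the goal of removing noise while enhancing detail: smooth the data to obtain a base layer, shrink small-amplitude detail as noise, and mildly amplify what remains. The filter scale, threshold, and boost factor below are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_enhancing_denoise(z, sigma=2.0, noise_thresh=0.02, boost=1.2):
    """Split a height field into base + detail, suppress small detail, amplify the rest."""
    base = gaussian_filter(z, sigma)
    detail = z - base
    shrunk = np.sign(detail) * np.maximum(np.abs(detail) - noise_thresh, 0.0)
    return base + boost * shrunk

# toy usage: a noisy height field (stand-in for a range scan) with a sharp ridge
rng = np.random.default_rng(3)
xx, yy = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
surface = np.tanh(10 * xx)                                # the detail feature to preserve
noisy = surface + 0.05 * rng.normal(size=surface.shape)
cleaned = detail_enhancing_denoise(noisy)
```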

    Compressive Sensing and Imaging Applications

    Compressive sensing (CS) is a new sampling theory which allows signals to be reconstructed from sub-Nyquist measurements. It states that a signal can be recovered exactly from randomly undersampled data points if the signal exhibits sparsity in some transform domain (wavelet, Fourier, etc.). Instead of sampling the signal uniformly in a local scheme, the signal is correlated with a series of sensing waveforms. These waveforms form the so-called sensing matrix, or measurement matrix. Every measurement is a linear combination of randomly picked signal components. By applying a nonlinear convex optimization algorithm, the original signal can be recovered. Therefore, signal acquisition and compression are realized simultaneously, and the amount of information to be processed is considerably reduced. Due to its unique sensing and reconstruction mechanism, CS creates a new situation in signal acquisition hardware design as well as software development, addressing the increasing pressure on imaging sensors for sensing modalities beyond the visible (ultraviolet, infrared, terahertz, etc.) and on algorithms to accommodate demands for higher-dimensional datasets (hyperspectral or video data cubes). The combination of CS with traditional optical imaging extends the capabilities and improves the performance of existing equipment and systems. Our research work is focused on the direct application of compressive sensing for imaging in both 2D and 3D cases, such as infrared imaging, hyperspectral imaging, and sum frequency generation microscopy. Data acquisition and compression are combined into one step. The computational complexity is passed to the receiving end, which always has sufficient computer processing power. The sensing stage requirement is pushed to the simplest and cheapest level. In short, a simple optical engine structure, a robust measuring method, and high-speed acquisition make a compressive sensing-based imaging system a strong competitor to the traditional one. These applications have benefited, and will continue to benefit, our lives in deeper and wider ways.
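    The sensing-and-recovery mechanism described above can be sketched in a few lines: each measurement is a random linear combination of the signal entries, and an ℓ1-regularized solver (plain ISTA here) recovers the sparse signal. The dimensions, sparsity level, and regularization weight are illustrative assumptions.

```python
import numpy as np

def ista(Phi, y, lam=0.05, iters=500):
    """ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2              # 1 / Lipschitz constant of the data term
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x - step * Phi.T @ (Phi @ x - y)              # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-thresholding step
    return x

# toy usage: a 10-sparse signal of length 256 recovered from 80 random measurements
rng = np.random.default_rng(5)
n, m, s = 256, 80, 10
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                # random Gaussian sensing matrix
y = Phi @ x_true
x_hat = ista(Phi, y)                                      # close to x_true in this regime
```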