
    Automatic alignment for three-dimensional tomographic reconstruction

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals at various angles and offsets, explicit inversion formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately, due to, e.g., calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply, and an additional inverse problem must be solved to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
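
    As a rough illustration of the idea, the sketch below alternates an algebraic (SIRT-style) reconstruction with a per-view search over the projection angles. The rotate-and-sum projector, the update step size, and the grid search are illustrative assumptions, not the authors' algorithm; detector offsets could be retrieved in the same alternating fashion.

        import numpy as np
        from scipy.ndimage import rotate

        def project(img, theta_deg):
            # Parallel-beam projection: rotate the image, integrate along rows.
            return rotate(img, theta_deg, reshape=False, order=1).sum(axis=0)

        def backproject(row, theta_deg, shape):
            # Smear one projection back across the image at the given angle.
            return rotate(np.tile(row, (shape[0], 1)), -theta_deg,
                          reshape=False, order=1)

        def align_and_reconstruct(sino, theta0, n_outer=10, n_sirt=5, step=0.02):
            theta = np.asarray(theta0, dtype=float).copy()
            img = np.zeros((sino.shape[1], sino.shape[1]))
            for _ in range(n_outer):
                # (a) algebraic reconstruction with current angle estimates
                for _ in range(n_sirt):
                    grad = sum(backproject(sino[i] - project(img, t), t, img.shape)
                               for i, t in enumerate(theta))
                    img += step * grad / len(theta)
                # (b) refine each view angle by minimizing its own residual
                for i in range(len(theta)):
                    cands = theta[i] + np.linspace(-0.5, 0.5, 11)  # degrees
                    resid = [np.linalg.norm(sino[i] - project(img, c))
                             for c in cands]
                    theta[i] = cands[int(np.argmin(resid))]
            return img, theta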

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in imaging applications that suffer from optical imperfections. Numerous deconvolution methods blindly estimate the blur in either inclusive or exclusive form, but they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimating the PSF statistics for two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
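
    The frequency-boosting idea can be sketched directly: a Gaussian PSF attenuates frequencies by exp(-sigma^2 |w|^2 / 2), so its inverse expands as a power series in |w|^2, and each |w|^(2k) factor corresponds to a repeated (negated) Laplacian, i.e. an even-derivative FIR filter. The truncation order and the assumption of a known sigma below are illustrative simplifications of the paper's blind, multi-model approach.

        import math
        import numpy as np
        from scipy.ndimage import gaussian_filter, laplace

        def deblur_even_derivatives(blurred, sigma, order=3):
            # Truncated series exp(s^2 |w|^2 / 2) ~ sum_k (s^2/2)^k |w|^(2k) / k!,
            # realized with repeated discrete Laplacians (|w|^2 <-> -Laplacian).
            out = blurred.astype(float).copy()
            term = blurred.astype(float)
            for k in range(1, order + 1):
                term = -laplace(term)
                out += (sigma**2 / 2) ** k / math.factorial(k) * term
            return out

        # Quick check on a synthetic scene blurred with a known sigma.
        rng = np.random.default_rng(0)
        scene = gaussian_filter(rng.random((128, 128)), 2.0)
        restored = deblur_even_derivatives(gaussian_filter(scene, 1.2), sigma=1.2)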

    Iterative CT reconstruction from few projections for the nondestructive post irradiation examination of nuclear fuel assemblies

    The core components of nuclear reactors (e.g. fuel assemblies, spacer grids, control rods) endure a harsh environment of high temperature, physical stress, and intense radiation. The integrity of these elements is crucial for the safe operation of nuclear power plants. Post Irradiation Examination (PIE) can reveal information about the integrity of the elements during normal operation and off-normal events. Computed tomography (CT) is a tool for evaluating the structural integrity of elements non-destructively. CT requires many projections to be acquired from different view angles, after which a mathematical algorithm reconstructs the object. Obtaining many projections is laborious and expensive in the nuclear industry, so reconstruction from a small number of projections is explored to achieve faster and more cost-efficient PIE. Classical reconstruction algorithms (e.g. filtered back projection) cannot offer stable reconstructions from few projections and create severe streaking artifacts. In this thesis, conventional algorithms are reviewed and new algorithms are developed for reconstructing nuclear fuel assemblies from few projections. CT reconstruction from few projections falls into two categories: sparse-view CT and limited-angle CT (tomosynthesis). Iterative reconstruction algorithms are developed for both cases within the framework of compressed sensing (CS). The performance of the algorithms is assessed using simulated projections and validated on real projections. The thesis also describes a systematic strategy for establishing the conditions of reconstruction and finds the optimal imaging parameters for reconstructing the fuel assemblies from few projections. --Abstract, page iii
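
    A minimal sketch of such a compressed-sensing iteration, alternating a data-consistency update with a total-variation (TV) proximal step, follows. The scikit-image projector and the Chambolle TV denoiser are stand-ins for the algorithms developed in the thesis, and the step size and TV weight are illustrative assumptions.

        import numpy as np
        from skimage.transform import radon, iradon
        from skimage.restoration import denoise_tv_chambolle

        def sparse_view_recon(sino, theta, n_iter=30, step=0.1, tv_weight=0.05):
            # sino: (n_detectors, n_angles) sinogram; theta: angles in degrees.
            img = np.zeros((sino.shape[0], sino.shape[0]))
            for _ in range(n_iter):
                # data consistency: backproject the sinogram residual
                resid = sino - radon(img, theta=theta, circle=True)
                img += step * iradon(resid, theta=theta, circle=True,
                                     filter_name=None)
                # sparsity prior: TV step suppresses streaking artifacts
                img = denoise_tv_chambolle(img, weight=tv_weight)
                img = np.clip(img, 0, None)   # attenuation is nonnegative
            return img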

    Data-proximal complementary ℓ¹-TV reconstruction for limited-data CT

    In a number of tomographic applications, data cannot be fully acquired, resulting in a severely underdetermined image reconstruction problem. In such cases, conventional methods lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient at compensating for missing data and reducing reconstruction artifacts. At the same time, however, tomographic data is also contaminated by noise, which poses an additional challenge. A single regularizer must therefore account for both the missing data and the noise, yet a particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction across multiple scales, where ℓ¹ curvelet regularization methods are well suited. To address this issue, in this paper we introduce a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages: the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. The two stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet-TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments.
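
    The two-stage structure might be sketched as follows, with wavelet soft-thresholding standing in for the curvelet ℓ¹ prox (a limited-view-adapted curvelet transform is what the paper actually implements). The data-proximity condition is realized here, as an assumption, by fitting the second stage to the first stage's synthesized data; all weights are illustrative.

        import numpy as np
        from skimage.transform import radon, iradon
        from skimage.restoration import denoise_wavelet, denoise_tv_chambolle

        def two_stage_recon(sino, theta, n_iter=20, step=0.1):
            shape = (sino.shape[0], sino.shape[0])
            # Stage 1: noise-robust reconstruction with an l1-type frame prior.
            x1 = np.zeros(shape)
            for _ in range(n_iter):
                resid = sino - radon(x1, theta=theta, circle=True)
                x1 += step * iradon(resid, theta=theta, circle=True,
                                    filter_name=None)
                x1 = denoise_wavelet(x1)       # stand-in for curvelet l1 prox
            # Stage 2: TV artifact reduction, kept data-proximal to stage 1.
            y1 = radon(x1, theta=theta, circle=True)
            x2 = x1.copy()
            for _ in range(n_iter):
                resid = y1 - radon(x2, theta=theta, circle=True)
                x2 += step * iradon(resid, theta=theta, circle=True,
                                    filter_name=None)
                x2 = denoise_tv_chambolle(x2, weight=0.05)
            return x2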

    MicroCT of Coronary Stents: Staining Techniques for 3-D Pathological Analysis

    In translational research, stent developers consult pathologists to obtain the most complete data from implanted test devices as efficiently as possible. In this study, micron-scale computed tomography combined with post-fixation staining allowed full volumes of previously implanted stents to be analyzed in situ, non-destructively. The increased soft-tissue contrast imparted by metal-containing stains allowed a qualitative analysis of the vessel's response to the implant with greater sensitivity and specificity while reducing beam-hardening artifact from the stent struts. The staining techniques developed here use iodine-potassium iodide, phosphomolybdic acid, and phosphotungstic acid, all of which bind to soft tissue and improve image quality through their ability to attenuate high-energy X-rays. With these stains, overall soft-tissue contrast increased by up to 85 percent, and contrast between the medial and neointimal layers of the vessel increased by up to 22 percent. Beam-hardening artifact was also reduced by up to 38 percent after staining. Acquiring data from the entire stent and the surrounding tissue improved the quality of stent analysis in multiple ways. The three-dimensional data enabled a comprehensive analysis of stent performance, yielding information on neointimal hyperplasia, percent stenosis, delineation of vessel wall layers, stent apposition, and stent fractures. By providing morphological data about stent deployment and host response, this method circumvents the need for traditional histology slides in morphometric analysis. The same data may also be used to target regions of interest, ensuring that histology slides are cut from the optimal locations for a more in-depth analysis. The agents involved are readily available in most pathology laboratories, are safe to work with, and allow rapid processing of tissue. The ability to forego histology altogether, or to highly focus what histology is performed on a vessel, has the potential to hasten the development process of any coronary stent.

    Mathematical Methods in Tomography

    This is the seventh Oberwolfach conference on the mathematics of tomography, the first having taken place in 1980. Tomography is the most popular of a series of medical and scientific imaging techniques that have been developed since the mid-1970s.

    Combining Undersampled Dithered Images

    Undersampled images, such as those produced by the HST WFPC-2, misrepresent fine-scale structure intrinsic to the astronomical sources being imaged. Analyzing such images is difficult on scales close to their resolution limits and may produce erroneous results. A set of "dithered" images of an astronomical source, however, generally contains more information about its structure than any single undersampled image, and may permit reconstruction of a "superimage" with Nyquist sampling. I present a tutorial on a method of image reconstruction that builds a superimage from a complex linear combination of the Fourier transforms of a set of undersampled dithered images. The method works by algebraically eliminating the high-order satellites in the periodic transforms of the aliased images. The reconstructed image is an exact representation of the data set with no loss of resolution at the Nyquist scale. The algorithm is derived directly from the theoretical properties of aliased images and involves no arbitrary parameters, requiring only that the dithers are purely translational and constant in pixel space over the domain of the object of interest. I show examples of its application to WFC and PC images, and argue for the method's use when the best recovery of point sources or morphological information at the HST diffraction limit is of interest. Comment: 22 pages, 9 EPS figures, submitted to PASP.
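
    In the simplest one-dimensional case, two exposures dithered by exactly half a pixel alias the same high frequencies with opposite phase, so the satellite spectra can be eliminated algebraically. The sketch below is a hypothetical reduction of the method to that case (the paper treats general 2-D translational dithers); it recovers the signal on a grid twice as fine.

        import numpy as np

        def combine_half_pixel_dithers(img0, img1):
            # img0[j] = s(j) and img1[j] = s(j + 1/2): two aliased sample
            # sets of the same signal. Returns s on the 2x finer grid by
            # phase-weighting the two transforms so the satellites cancel.
            n = img0.size
            F0, F1 = np.fft.fft(img0), np.fft.fft(img1)
            K = np.arange(2 * n)
            S = F0[K % n] + np.exp(-1j * np.pi * K / n) * F1[K % n]
            return np.fft.ifft(S).real

        # A frequency above the coarse Nyquist rate is recovered exactly.
        x = np.arange(64) / 2.0                 # fine grid, half-pixel spacing
        s = np.sin(2 * np.pi * 0.7 * x)         # aliased at the coarse rate
        rec = combine_half_pixel_dithers(s[0::2], s[1::2])
        assert np.allclose(rec, s)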