
    Lose The Views: Limited Angle CT Reconstruction via Implicit Sinogram Completion

    Computed Tomography (CT) reconstruction is a fundamental component of a wide variety of applications ranging from security to healthcare. Classical techniques require measuring projections, called sinograms, from a full 180° view of the object. This is impractical in a limited-angle scenario, when the viewing angle is less than 180°, which can occur due to factors including restrictions on scanning time, limited flexibility of scanner rotation, etc. The resulting sinograms cause existing techniques to produce highly artifact-laden reconstructions. In this paper, we propose to address this problem through implicit sinogram completion, on a challenging real-world dataset containing scans of common checked-in luggage. We propose a system, consisting of 1D and 2D convolutional neural networks, that operates on a limited-angle sinogram to directly produce the best estimate of a reconstruction. Next, we use the x-ray transform on this reconstruction to obtain a "completed" sinogram, as if it came from a full 180° measurement. We feed this to standard analytical and iterative reconstruction techniques to obtain the final reconstruction. We show with extensive experimentation that this combined strategy outperforms many competitive baselines. We also propose a measure of confidence for the reconstruction that enables a practitioner to gauge the reliability of a prediction made by our network. We show that this measure is a strong indicator of quality as measured by the PSNR, while not requiring ground truth at test time. Finally, using a segmentation experiment, we show that our reconstruction preserves the 3D structure of objects effectively.
    Comment: Spotlight presentation at CVPR 201
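The x-ray transform used above to re-project the network's estimate into a "completed" sinogram is, for parallel-beam geometry, the Radon transform. A toy discrete sketch (nearest-neighbour rotation; not the paper's implementation) illustrates the idea:

```python
import numpy as np

def xray_transform(img, angles_deg):
    """Toy parallel-beam forward projection (discrete Radon transform).
    For each angle, resample the image on a rotated grid and sum along
    one axis to obtain one detector row of the sinogram."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    sino = np.zeros((len(angles_deg), n))
    for i, a in enumerate(np.deg2rad(angles_deg)):
        # rotate sampling coordinates about the image centre
        xr = np.cos(a) * (xs - c) - np.sin(a) * (ys - c) + c
        yr = np.sin(a) * (xs - c) + np.cos(a) * (ys - c) + c
        xi = np.rint(xr).astype(int)
        yi = np.rint(yr).astype(int)
        ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        rot = np.zeros_like(img, dtype=float)
        rot[ys.astype(int)[ok], xs.astype(int)[ok]] = img[yi[ok], xi[ok]]
        sino[i] = rot.sum(axis=0)  # integrate along detector columns
    return sino
```

Restricting the angle list to, say, [0°, 120°) yields exactly the kind of limited-angle sinogram the paper's network takes as input.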

    Regularized 4D-CT reconstruction from a single dataset with a spatio-temporal prior

    X-ray Computerized Tomography (CT) reconstructions can be severely impaired by the patient's respiratory motion and cardiac beating. Motion must thus be recovered in addition to solving the 3D reconstruction problem. The approach generally followed to reconstruct dynamic volumes consists of largely increasing the number of projections so that independent reconstructions are possible using only subsets of projections from the same phase of the cyclic movement. Apart from this major trend, motion compensation (MC) aims at recovering the object of interest and its motion by accurately modeling its deformation over time, allowing the whole dataset to be used for 4D reconstruction in a coherent way. We consider a different approach for dynamic reconstruction based on inverse problems, without additional measurements or explicit knowledge of the motion. The dynamic sequence is reconstructed from a single dataset, only assuming the motion's continuity and periodicity. This inverse problem is solved by minimizing the sum of a data-fidelity term, consistent with the dynamic nature of the data, and a regularization term which implements an efficient spatio-temporal version of the total variation (TV). We demonstrate the potential of this approach and its practical feasibility on 2D and 3D+t reconstructions of a mechanical phantom and patient data
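The regularizer described above penalizes spatial and temporal variation jointly. A minimal smoothed spatio-temporal TV for a 2D+t sequence might look like the following; the exact weighting, discretization, and smoothing used in the paper may differ:

```python
import numpy as np

def tv_spatiotemporal(x, lam_s=1.0, lam_t=1.0, eps=1e-8):
    """Smoothed spatio-temporal total variation of a 2D+t sequence
    x[t, i, j]: spatial finite differences within each frame plus a
    temporal difference across frames, wrapped periodically to reflect
    the assumed periodicity of the motion."""
    dt = np.roll(x, -1, axis=0) - x                 # periodic temporal difference
    di = np.diff(x, axis=1, append=x[:, -1:, :])    # vertical spatial difference
    dj = np.diff(x, axis=2, append=x[:, :, -1:])    # horizontal spatial difference
    return np.sum(np.sqrt(lam_s * (di**2 + dj**2) + lam_t * dt**2 + eps))
```

Adding this term to a data-fidelity term and minimizing the sum (e.g. by a proximal or subgradient method) gives the kind of variational 4D reconstruction the abstract describes.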

    The Application of Tomographic Reconstruction Techniques to Ill-Conditioned Inverse Problems in Atmospheric Science and Biomedical Imaging

    A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described and the properties of the imaging configurations are addressed. The presented results are particularly suited for highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this paper comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and results in significant optical scattering. 
In order to perform tomography on the diffused signal, simulations that describe the sporadic photon migration must be incorporated into the algorithm. The third paper presents a novel Monte Carlo technique derived from the optical scattering solution for spheroidal particles designed to mimic mitochondria and deformed cell nuclei. Simulated results of optical diffusion are presented. The potential for improving existing imaging modalities through continual development of sparse tomography and optical scattering methods is discussed
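The algebraic reconstruction techniques discussed above are row-action methods; the classic Kaczmarz form can be sketched as follows, with the initial guess exposed as a parameter since the text stresses its influence on accuracy:

```python
import numpy as np

def art(A, b, n_iters=50, relax=1.0, x0=None):
    """Algebraic Reconstruction Technique (Kaczmarz iteration): sweep
    the rows of the system matrix A, projecting the current estimate
    onto each measurement hyperplane a_i . x = b_i in turn.
    x0 is the 'initial guess' whose choice the text highlights."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            ai = A[i]
            # move x toward the i-th hyperplane by the relaxed residual
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

For consistent systems the sweeps converge to a solution; for the ill-conditioned, underdetermined systems of sparse tomography, the limit depends strongly on `x0`, which is why initialization strategies matter.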

    3D exemplar-based image inpainting in electron microscopy

    In electron microscopy (EM) a common problem is the non-availability of data, which causes artefacts in reconstructions. In this thesis the goal is to generate artificial data where it is missing in EM, using exemplar-based inpainting (EBI). We implement an accelerated 3D version tailored to applications in EM, which reduces reconstruction times from days to minutes. We develop intelligent sampling strategies to find optimal data as input for reconstruction methods. Further, we investigate approaches to reduce electron dose and acquisition time. Sparse sampling followed by inpainting is the most promising approach. As common evaluation measures may lead to misinterpretation of results in EM and distort a subsequent analysis, we propose to use application-driven metrics and demonstrate this in a segmentation task. A further application of our technique is the artificial generation of projections in tilt-based EM. EBI is used to generate missing projections, such that the full angular range is covered. Subsequent reconstructions are significantly enhanced in terms of resolution, which facilitates further analysis of samples. In conclusion, EBI proves promising when used as an additional data generation step to tackle the non-availability of data in EM, which is evaluated in selected applications. Enhancing adaptive sampling methods and refining EBI, especially considering their mutual influence, promotes higher throughput in EM using less electron dose while not lessening quality
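A single greedy step of exemplar-based inpainting, reduced to 2D and one pixel for illustration, might look like the sketch below. The thesis' accelerated 3D variant adds patch-priority ordering along the fill front and search-space restriction; none of that is reproduced here:

```python
import numpy as np

def inpaint_pixel(img, mask, y, x, p=1):
    """One greedy exemplar-based inpainting step: compare the known
    part of the (2p+1)x(2p+1) patch around missing pixel (y, x) with
    every fully-known patch in the image (SSD distance) and copy the
    centre value of the best match."""
    h, w = img.shape
    tgt = img[y - p:y + p + 1, x - p:x + p + 1]
    known = ~mask[y - p:y + p + 1, x - p:x + p + 1]  # valid comparison pixels
    best, best_d = img[y, x], np.inf
    for i in range(p, h - p):
        for j in range(p, w - p):
            if mask[i - p:i + p + 1, j - p:j + p + 1].any():
                continue  # candidate patch must be fully known
            src = img[i - p:i + p + 1, j - p:j + p + 1]
            d = np.sum((src[known] - tgt[known]) ** 2)
            if d < best_d:
                best, best_d = src[p, p], d
    return best
```

On periodic or self-similar content (the regime where EBI shines), the copied exemplar reproduces the missing structure rather than blurring it.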

    Markov random field image modelling

    This work investigated some of the consequences of using a priori information in image processing, using computed tomography (CT) as an example. Prior information is information about the solution that is known apart from measurement data. This information can be represented as a probability distribution. In order to define a probability density distribution in high-dimensional problems like those found in image processing, it becomes necessary to adopt some form of parametric model for the distribution. Markov random fields (MRFs) provide just such a vehicle for modelling the a priori distribution of labels found in images. In particular, this work investigated the suitability of MRF models for modelling a priori information about the distribution of attenuation coefficients found in CT scans
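A pairwise MRF prior of the kind described assigns each image an energy, with exp(-energy) as the unnormalized prior probability, so smooth attenuation maps are favoured. A minimal Gaussian-MRF sketch with 4-connected neighbourhoods (the squared-difference clique potential here is only one illustrative choice among the potentials such work evaluates):

```python
import numpy as np

def gibbs_energy(img, beta=1.0):
    """Pairwise MRF (Gibbs) energy: beta times the sum of squared
    differences between 4-connected neighbours. Lower energy means
    higher prior probability, i.e. smoother attenuation maps."""
    e = np.sum((img[1:, :] - img[:-1, :]) ** 2)   # vertical neighbour pairs
    e += np.sum((img[:, 1:] - img[:, :-1]) ** 2)  # horizontal neighbour pairs
    return beta * e
```

In a Bayesian reconstruction this energy is added to the data-fidelity term, and beta trades off smoothness against agreement with the measured sinogram.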

    A Parametric Model for the Analysis and Quantification of Foveal Shapes

    Recent advances in OCT enable a detailed examination of the human retina in vivo for clinical routine and experimental eye research. One of the structures inside the retina of immense scientific interest is the fovea, a small retinal pit located in the central region with extraordinary visual resolution. To date, only a few investigations have captured foveal morphology across a large subject group through a detailed analysis employing mathematical models. In this work, we develop a parametric model function to describe the shape of the human fovea. Starting with a detailed discussion of the history and present state of fovea research, we define the requirements for a suitable model and derive a function which can represent a broad range of foveal shapes. The model is one-dimensional in its basic form and can only account for the shape of one particular section through a fovea. Therefore, we apply a radial fitting scheme in different directions, which can capture a fovea in its full three-dimensional appearance. Highly relevant foveal characteristics, derived from the model, provide valuable descriptions to quantify the fovea and allow for a detailed analysis of different foveal shapes. To put the theoretical model into practice, we develop a numerical scheme to compute model parameters from retinal OCT scans and to reconstruct the shape of an entire fovea. For the sake of scientific reproducibility, this section includes implementation details, examples and a discussion of performance considerations. Finally, we present several studies which employed the fovea model successfully. A first feasibility study verifies that the parametric model is suitable for foveal shapes occurring in a large set of healthy human eyes. In a follow-up investigation, we analyse foveal characteristics occurring in healthy humans in detail. This analysis concerns different aspects including, e.g., an investigation of the fovea's asymmetry, a gender comparison, a left-versus-right-eye correlation and the identification of subjects with extreme foveal shapes. Furthermore, we show how the model was used to support investigations unrelated to the direct quantification of the fovea itself. In these investigations we employed the model to compute anatomically correct regions of interest in an analysis of the OCB and to calculate an average fovea for an optical simulation of light rays. We conclude with currently unpublished data showing the fovea modelling of hunting birds, which have unusual, funnel-like foveal shapes
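The radial fitting scheme samples the foveal surface along rays from the pit centre and fits the 1D model to each profile. The sampling step alone, with nearest-neighbour interpolation and without the thesis' actual model function, might be sketched as:

```python
import numpy as np

def radial_profiles(surface, center, n_dirs=8, n_samples=32):
    """Sample a retinal height map along rays from the foveal centre:
    one 1D profile per direction, nearest-neighbour interpolation.
    Each profile would then be fitted by the 1D parametric model
    (not reproduced here) to capture the full 3D foveal shape."""
    h, w = surface.shape
    cy, cx = center
    rmax = min(cy, cx, h - 1 - cy, w - 1 - cx)  # stay inside the map
    r = np.linspace(0, rmax, n_samples)
    profiles = np.zeros((n_dirs, n_samples))
    for k, th in enumerate(np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)):
        ys = np.rint(cy + r * np.sin(th)).astype(int)
        xs = np.rint(cx + r * np.cos(th)).astype(int)
        profiles[k] = surface[ys, xs]
    return profiles
```

Per-direction fits then yield direction-resolved characteristics (pit depth, slope, rim height), which is what makes asymmetry analyses like those above possible.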

    Computed Tomography of Chemiluminescence: A 3D Time Resolved Sensor for Turbulent Combustion

    Time-resolved 3D measurements of turbulent flames are required to further the understanding of combustion and support advanced simulation techniques (LES). Computed Tomography of Chemiluminescence (CTC) allows a flame's 3D chemiluminescence profile to be obtained by inverting a series of integral measurements. CTC provides the instantaneous 3D flame structure, and can also measure: excited species concentrations, equivalence ratio, heat release rate, and possibly strain rate. High resolutions require simultaneous measurements from many viewpoints, and the cost of multiple sensors has traditionally limited spatial resolutions. However, recent improvements in commodity cameras make a high-resolution CTC sensor possible, which is investigated in this work. Using realistic LES Phantoms (known fields), the CT algorithm (ART) is shown to produce low-error reconstructions even from limited noisy datasets. Error from self-absorption is also tested using LES Phantoms, and a modification to ART that successfully corrects this error is presented. A proof-of-concept experiment using 48 non-simultaneous views is performed and successfully resolves a Matrix Burner flame to 0.01% of the domain width (D). ART is also extended to 3D (without stacking) to allow 3D camera locations and optical effects to be considered. An optical integral geometry (weighted double-cone) is presented that corrects for limited depth-of-field, and (even with poorly estimated camera parameters) reconstructs the Matrix Burner as well as the standard geometry does. CTC is implemented using five PicSight P32M cameras and mirrors to provide 10 simultaneous views. Measurements of the Matrix Burner and a Turbulent Opposed Jet achieve exposure times as low as 62 μs, with even shorter exposures possible. With only 10 views the spatial resolution of the reconstructions is low. Indeed, a cosine Phantom study shows that 20–40 viewing angles are necessary to achieve high resolutions (0.01–0.04D). 
With 40 P32M cameras costing £40000, future CTC implementations can achieve high spatial and temporal resolutions

    Separating Signal from Noise in High-Density Diffuse Optical Tomography

    High-density diffuse optical tomography (HD-DOT) is a relatively new neuroimaging technique that detects the changes in hemoglobin concentrations following neuronal activity through the measurement of near-infrared light intensities. Thus, it has the potential to be a surrogate for functional MRI (fMRI) as a more naturalistic, portable, and cost-effective neuroimaging system. As in other neuroimaging modalities, head motion is the most common source of noise in HD-DOT data, resulting in spurious effects in the functional brain images. Unlike other neuroimaging modalities, data quality assessment methods are still underdeveloped for HD-DOT. Therefore, developing robust motion detection and motion removal methods in its data processing pipeline is a crucial step toward making HD-DOT a reliable neuroimaging modality. In particular, our lab is interested in using HD-DOT to study brain function in clinical populations with metal implants that cannot be studied using fMRI due to their contraindications. Two of these populations are patients having movement disorders (Parkinson disease or essential tremor) with deep brain stimulation (DBS) implants and individuals with cochlear implants (CI). These two groups both receive tremendous benefit from their implants at the statistical level; however, there is significant single-subject variability. Our overarching goal is to use HD-DOT to find the relationships between neuronal function and behavioral measures in these populations to optimize the contact location of these implant surgeries. However, one of the challenges in analyzing the data in these subjects, especially in patients with DBS, is their high level of motion due to tremors when their DBS implant is turned off. This further motivates the importance of the methods presented herein for separating signal from noise in HD-DOT data. 
To this end, I will first assess the efficacy of state-of-the-art motion correction methods introduced in the fNIRS literature for HD-DOT. Then, I will present a novel global metric inspired by motion detection methods in fMRI called GVTD (global variance of the temporal derivatives). Our results show that GVTD-based motion detection not only outperforms other comparable motion detection methods in fNIRS, but also outperforms motion detection with accelerometers. I will then present my work on collecting and processing HD-DOT data for two clinical populations with metal implants in their brain and the preliminary results for these studies. Our results in PD patients show that HD-DOT can reliably map neuronal activity in this group and replicate previously published results using PET and fMRI. Our results in the CI users provide evidence for the recruitment of the prefrontal cortex in processing speech to compensate for the decreased activity in the temporal cortex. These findings support the theory of cognitive demand increase in effortful listening situations. In summary, the presented methods for separating signal from noise enable direct comparisons of HD-DOT images with those of fMRI in clinical populations with metal implants and equip this modality to be used as a surrogate for fMRI
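Consistent with its name, GVTD can be sketched as the root-mean-square across channels of the temporal derivative of the measurement time courses. This is a simplified reading of the metric; the published definition may add channel selection and normalization steps:

```python
import numpy as np

def gvtd(data):
    """GVTD sketch: RMS across channels of the temporal derivative.
    data: array of shape (n_channels, n_timepoints). Returns one value
    per time step; spikes in the output flag frames contaminated by
    head motion, analogous to DVARS in fMRI."""
    d = np.diff(data, axis=1)              # temporal derivative per channel
    return np.sqrt(np.mean(d ** 2, axis=0))  # RMS over channels
```

Thresholding this trace (e.g. at a multiple of its mode or median) yields a censoring mask of motion-corrupted frames without requiring any external sensor such as an accelerometer.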