Hierarchical reconstruction using geometry and sinogram restoration
Jerry L. Prince and Alan S. Willsky. IP Editors' Information Classification Scheme (EDICS): 2.3. Includes bibliographical references (p. 30-32). Supported by the National Science Foundation (MIP-9015281), the Office of Naval Research (N00014-91-J-1004), the U.S. Army Research Office (DAAL03-86-K-0171), and a U.S. Army Research Office Fellowship.
PyHST2: an hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities
We present the PyHST2 code, which is in service at the ESRF for phase-contrast
and absorption tomography. The code has been engineered to sustain the high
data flow typical of third-generation synchrotron facilities (10 terabytes per
experiment) by adopting a distributed and pipelined architecture. The code
implements, besides a default filtered backprojection reconstruction, iterative
reconstruction techniques with a priori knowledge. The latter are used to
improve the reconstruction quality or to reduce the required data volume while
still reaching a given quality goal. The implemented a priori knowledge
techniques are based on total variation penalisation and on a recently
introduced convex functional built from overlapping patches.
We give details of the different methods and their implementations; the code
is distributed under a free license.
We also provide methods for estimating, in the absence of ground-truth data,
the optimal parameter values for the a priori techniques.
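To make the total variation penalisation concrete, here is a minimal, self-contained sketch of TV-penalised reconstruction in the simplest setting (1D denoising, identity forward operator), minimised by plain subgradient descent. This is an illustration of the general technique only, not PyHST2's actual distributed implementation; the signal, step size, and penalty weight are all made up for the example.

```python
# Minimal sketch of total-variation-penalised reconstruction (1D denoising case).
# Illustrates the general technique, NOT PyHST2's implementation.
# Minimises  0.5 * ||x - b||^2 + lam * sum_i |x[i+1] - x[i]|  by subgradient descent.

def sign(v):
    return (v > 0) - (v < 0)

def tv_denoise(b, lam=0.3, step=0.05, iters=2000):
    x = list(b)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - b[i] for i in range(n)]          # gradient of the data term
        for i in range(n - 1):                       # subgradient of the TV term
            s = sign(x[i + 1] - x[i])
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(n)]
    return x

# Noisy piecewise-constant signal: TV flattens the plateaus but keeps the jump.
noisy = [0.1, -0.1, 0.05, 1.1, 0.9, 1.05, 0.95, 1.0]
clean = tv_denoise(noisy)
```

The key property shown here is edge preservation: the total variation of the result drops, yet the large jump between the two plateaus survives, which is exactly why TV priors suit the piecewise-homogeneous objects common in tomography.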
A Data-Driven Edge-Preserving D-bar Method for Electrical Impedance Tomography
In Electrical Impedance Tomography (EIT), the internal conductivity of a body
is recovered via current and voltage measurements taken at its surface. The
reconstruction task is a highly ill-posed nonlinear inverse problem that is
very sensitive to noise and requires regularized solution methods, of which
D-bar is the only proven method. The resulting EIT images have low spatial
resolution due to smoothing caused by the low-pass filtering of the
regularization.
In many applications, such as medical imaging, it is known a priori that
the target contains sharp features, such as organ boundaries, as well as
approximate ranges for realistic conductivity values. In this paper, we use
this information in a new edge-preserving EIT algorithm, based on the original
D-bar method coupled with a deblurring flow stopped at a minimal data
discrepancy. The method makes heavy use of a novel data fidelity term based on
the so-called CGO sinogram. This nonlinear data step provides superior
robustness over traditional EIT data formats, such as current-to-voltage
matrices or Dirichlet-to-Neumann operators, for commonly used current patterns.
Revised consistency conditions for PET data
Proceedings of the 2007 IEEE Nuclear Science Symposium Conference Record (NSS'07), Honolulu, Hawaii, USA, Oct. 27 - Nov. 3, 2007.
Tomographic Data Consistency Conditions (TDCC) are frequently employed to improve the quality of PET data. However, most of these consistency conditions were derived for X-ray computerized tomography (CT), and their validity for other imaging modalities has not been well established. For instance, it is well known from (X-ray) CT data that the sum of the projection data from one view of the parallel-beam projections is a constant, independent of the view angle. This consistency condition is based on well-known mathematical properties of the Radon transform and yields good results when employed in noise removal or sinogram restoration. But it assumes that emission and detection of radiation occur within a thin (ideally zero-width) line-of-response (LOR), with a flat probability distribution of the detection (in PET) or absorption (in X-ray CT) along such an LOR. This assumption, while valid for CT, is not realistic for PET acquisitions. Thus, TDCC for PET should be revised to check their validity under more realistic detection models.
In this work we review the main differences between PET and CT data and study whether these consistency conditions should be modified to take into account the dependence of the detection probabilities on the distance to the center of the line-of-response. Results from simulations are also presented to illustrate the importance of these effects; they indicate that some consistency conditions can be violated at the 10% level.
This work has been partially funded by a UCM grant. Part of the computations of this work were done at the 'High capacity cluster for physical techniques' of the Faculty for Physical Sciences of the UCM, funded in part by the EU under the FEDER program and in part by the UCM. This work has also been partially funded by CD-TEAM, program CENIT, Ministerio de Industria, Spain.
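The parallel-beam consistency condition discussed above is easy to verify numerically in the ideal zero-width-LOR model: summing all bins of any single view of a discrete image reproduces the total activity. The toy image and the restriction to the 0-degree and 90-degree views below are my own simplification for illustration, not from the paper.

```python
# Illustration of the parallel-beam consistency condition: the sum over all
# bins of one view equals the total activity, independent of the view angle.
# For simplicity only the 0-degree and 90-degree views of a small discrete
# image are used (row sums vs. column sums).

image = [
    [0, 1, 2, 0],
    [3, 0, 1, 1],
    [0, 2, 0, 4],
]

view_0deg = [sum(row) for row in image]         # integrate along one axis
view_90deg = [sum(col) for col in zip(*image)]  # integrate along the other

total = sum(sum(row) for row in image)
# Both view sums reproduce the total activity, as the Radon transform predicts.
assert sum(view_0deg) == sum(view_90deg) == total
```

The paper's point is that once the LOR has finite width and a non-flat detection probability profile, as in real PET, this equality only holds approximately, and the simulations quoted above show violations at the 10% level.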
A hierarchical algorithm for limited-angle reconstruction
Jerry L. Prince and Alan S. Willsky. Caption title. Includes bibliographical references. Supported by the National Science Foundation (ECS-87-00903) and the U.S. Army Research Office (DAAL03-86-K-0171).
Residual Back Projection With Untrained Neural Networks
Background and Objective: The success of neural networks in a number of image
processing tasks has motivated their application in image reconstruction
problems in computed tomography (CT). While progress has been made in this
area, the lack of stability and theoretical guarantees for accuracy, together
with the scarcity of high-quality training data for specific imaging domains,
poses challenges for many CT applications. In this paper, we present a framework
for iterative reconstruction (IR) in CT that leverages the hierarchical
structure of neural networks, without the need for training. Our framework
incorporates this structural information as a deep image prior (DIP), and uses
a novel residual back projection (RBP) connection that forms the basis for our
iterations.
Methods: We propose using an untrained U-net in conjunction with a novel
residual back projection to minimize an objective function and achieve
high-accuracy reconstruction. In each iteration, the weights of the untrained
U-net are optimized, and the output of the U-net in the current iteration is
used to update the input of the U-net in the next iteration through the
aforementioned RBP connection.
Results: Experimental results demonstrate that the RBP-DIP framework offers
improvements over other state-of-the-art conventional IR methods, as well as
pre-trained and untrained models with similar network structures under multiple
conditions. These improvements are particularly significant in the few-view,
limited-angle, and low-dose imaging configurations.
Conclusions: Applied to both parallel- and fan-beam X-ray imaging, our
framework shows significant improvement under multiple conditions. Furthermore,
the proposed framework requires no training data and can be adjusted on demand
to adapt to different conditions (e.g. noise level, geometry, and imaged
object).
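The core of the residual back projection connection is the correction x <- x + eta * A^T (b - A x): the current data residual is back-projected and added to the estimate. The sketch below shows only that Landweber-style update on a tiny made-up linear system; the paper's actual method wraps this correction around an untrained U-net whose weights are re-optimized each iteration, which is omitted here.

```python
# Simplified sketch of the residual back projection idea:
#     x <- x + eta * A^T (b - A x)
# i.e. back-project the current data residual and use it as a correction.
# This is a Landweber-style stand-in; the RBP-DIP paper couples this update
# with an untrained U-net, which is omitted from this toy example.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rbp_iterations(A, b, eta=0.1, iters=2000):
    x = [0.0] * len(A[0])
    At = transpose(A)
    for _ in range(iters):
        residual = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        correction = matvec(At, residual)   # back-project the residual
        x = [xi + eta * ci for xi, ci in zip(x, correction)]
    return x

# Tiny toy "forward projector" A and consistent measurements b (solution [1, 2]).
A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 2.0, 3.0]
x = rbp_iterations(A, b)
```

For consistent data and a small enough step size eta, the iteration converges to the least-squares solution; in the paper the same residual drives the input update of the U-net from one iteration to the next.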
Studies of Sensitivity in the Dictionary Learning Approach to Computed Tomography: Simplifying the Reconstruction Problem, Rotation, and Scale
In this report, we address the problem of low-dose tomographic image reconstruction using dictionary priors learned from training images. In our recent work [22], dictionary learning is used to incorporate priors from training images and construct a dictionary; the reconstruction problem is then formulated in a convex optimization framework by looking for a solution with a sparse representation in the subspace spanned by the dictionary. The work in [22] has shown that using learned dictionaries in computed tomography can lead to superior image reconstructions compared with classical methods. Our formulation in [22] enforces that the solution is an exact representation by the dictionary; in this report, we investigate this requirement. Furthermore, the underlying assumption that the scale and orientation of the training images are consistent with those of the unknown image of interest may not be realistic. We investigate the sensitivity and robustness of the reconstruction to variations of scale and orientation in the training images, and we suggest algorithms to estimate, from the data, the correct relative scale and orientation of the unknown image with respect to the training images.
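The sparse-representation step that such dictionary-based formulations rely on can be sketched in miniature: given a dictionary D, find sparse coefficients alpha with x = D alpha by solving a lasso problem with ISTA (iterative soft thresholding). The dictionary, signal, and parameters below are toy stand-ins chosen for illustration, not atoms learned from CT training images, and ISTA is a generic solver rather than the specific algorithm of [22].

```python
# Hedged sketch of the sparse-coding step behind dictionary-based
# reconstruction: minimise  0.5 * ||D a - b||^2 + lam * ||a||_1  by ISTA
# (gradient step on the data term, then soft thresholding).
# Toy 3-atom dictionary, not one learned from training images.

def soft(v, t):
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(D, b, lam=0.05, step=0.2, iters=1000):
    m, k = len(D), len(D[0])
    alpha = [0.0] * k
    for _ in range(iters):
        r = [sum(D[i][j] * alpha[j] for j in range(k)) - b[i] for i in range(m)]
        grad = [sum(D[i][j] * r[i] for i in range(m)) for j in range(k)]
        alpha = [soft(alpha[j] - step * grad[j], step * lam) for j in range(k)]
    return alpha

# The signal b is built from atom 0 only, so a sparse code is recoverable.
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [1.0, 1.0, 0.0]]
b = [2.0, 0.0, 2.0]   # equals 2 * (first column of D)
alpha = ista(D, b)
```

The solver identifies the single active atom and drives the other coefficients exactly to zero, which is the behaviour a dictionary prior exploits when restricting the reconstruction to the subspace spanned by a few atoms.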