GPU-based Iterative Cone Beam CT Reconstruction Using Tight Frame Regularization
X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical
concern in most image guided radiation therapy procedures. It is the goal of
this paper to develop a fast GPU-based algorithm to reconstruct high quality
CBCT images from undersampled and noisy projection data so as to lower the
imaging dose. For this purpose, we have developed an iterative tight frame (TF)
based CBCT reconstruction algorithm. A condition that a real CBCT image has a
sparse representation under a TF basis is imposed in the iteration process as
regularization to the solution. To speed up the computation, a multi-grid
method is employed. Our GPU implementation has achieved high computational
efficiency and a CBCT image of resolution 512×512×70 can be
reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom
and a physical Catphan phantom. It is found that our TF-based algorithm is able
to reconstruct CBCT images under undersampling and low-mAs conditions. We have
also quantitatively analyzed the reconstructed CBCT image quality in terms of
modulation-transfer-function and contrast-to-noise ratio under various scanning
conditions. The results confirm the high CBCT image quality obtained from our
TF algorithm. Moreover, our algorithm has also been validated in a real
clinical context using a head-and-neck patient case. Comparisons of the
developed TF algorithm and the current state-of-the-art TV algorithm have also
been made in various cases studied in terms of reconstructed image quality and
computation efficiency.
Comment: 24 pages, 8 figures, accepted by Phys. Med. Bio
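The sparsity-regularized iterative reconstruction described in this abstract can be illustrated with a minimal ISTA sketch on a toy 1-D problem. This is an illustration of the general idea, not the paper's algorithm: the tight-frame transform is replaced by the identity (the signal itself is assumed sparse), the cone-beam projector by a random matrix `A`, and all sizes and weights are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for sparsity-regularized iterative reconstruction:
# recover a sparse x from undersampled, noisy measurements y = A x + n.
n, m = 100, 40                                   # signal length, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)     # illustrative "projector"
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0    # 5-sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)   # undersampled, noisy data

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
lam = 0.05                                       # regularization weight
x = np.zeros(n)
for _ in range(1000):                            # ISTA iterations
    x = soft(x - step * A.T @ (A @ x - y), step * lam)

print(np.linalg.norm(x - x_true))                # small reconstruction error
```

In the paper's setting the soft-thresholding step would act on tight-frame coefficients of the volume rather than on the voxels directly, and the matrix-vector products would be GPU forward/back-projections.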
High-accuracy sub-pixel motion estimation from noisy images in Fourier domain
In this paper, we propose a new method for estimating subpixel motion by exploiting the principle of phase correlation in the Fourier domain. The method is based on a linear weighting of the height of the main peak on the one hand and the difference between its two neighboring side peaks on the other. Using both synthetic and real data, we show that the proposed method outperforms many established approaches and achieves improved accuracy even in the presence of noisy samples.
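The peak-based subpixel refinement can be sketched in a few lines of 1-D phase correlation. The ratio-of-neighbours estimator below is a common textbook choice rather than the paper's specific linear weighting, and all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D phase correlation with a simple peak/side-peak subpixel refinement.
n = 256
true_shift = 3.4
f = rng.standard_normal(n)                       # broadband test signal
# Apply an exact circular subpixel shift in the Fourier domain
g = np.real(np.fft.ifft(np.fft.fft(f) *
                        np.exp(-2j * np.pi * np.fft.fftfreq(n) * true_shift)))

F, G = np.fft.fft(f), np.fft.fft(g)
R = np.conj(F) * G
R /= np.maximum(np.abs(R), 1e-12)                # normalized cross-power spectrum
c = np.real(np.fft.ifft(R))                      # phase-correlation surface

p = int(np.argmax(c))                            # integer peak location
# Refine with the ratio of the main peak and its right-hand neighbour
# (valid when the true shift lies between p and p + 1):
r = c[(p + 1) % n] / c[p]
est = p + r / (1.0 + r)
print(est)                                       # approximately 3.4
```

For pure integer shifts the correlation surface is a single spike; a fractional shift spreads it into a sinc-like peak, which is exactly what the main-peak/side-peak relation exploits.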
Statistical performance analysis of a fast super-resolution technique using noisy translations
It is well known that the registration process is a key step for
super-resolution reconstruction. In this work, we propose to use a
piezoelectric system, easily adaptable to most microscopes and telescopes, that
controls their motion accurately (down to nanometers) and therefore allows
multiple images of the same scene to be acquired at different controlled positions.
Then a fast super-resolution algorithm \cite{eh01} can be used for efficient
super-resolution reconstruction. In this case, acquiring only the optimal
number of images for a given resolution enhancement factor is generally not
enough to obtain satisfactory results, owing to the random inaccuracy of the
positioning system. Thus
we propose to take several images around each reference position. We study the
error produced by the super-resolution algorithm due to spatial uncertainty as
a function of the number of images per position. We obtain a lower bound on the
number of images that is necessary to ensure a given error upper bound with
probability higher than some desired confidence level.
Comment: 15 pages, submitte
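The kind of bound the abstract describes can be illustrated with a generic Chebyshev-style argument. This is a sketch of the reasoning pattern, not the paper's actual result; the function name and parameter values are invented for the example.

```python
import math

# Chebyshev-style illustration: if each of N images contributes an
# i.i.d. positioning error of standard deviation sigma, the averaged
# error has variance sigma**2 / N, so
#     P(|error| >= eps) <= sigma**2 / (N * eps**2).
# Requiring this tail probability to be at most 1 - confidence gives a
# lower bound on the number of images N per reference position.
def min_images(sigma, eps, confidence):
    return math.ceil(sigma ** 2 / (eps ** 2 * (1.0 - confidence)))

# e.g. sigma = 0.5 pixel, target error eps = 0.1 pixel, 95% confidence:
print(min_images(sigma=0.5, eps=0.1, confidence=0.95))  # → 500
```

Sharper tail bounds (e.g. Gaussian concentration) would give a much smaller N for the same confidence; Chebyshev is used here only because it needs no distributional assumption beyond a finite variance.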