
    Mitigation of artifacts due to isolated acoustic heterogeneities in photoacoustic computed tomography using a variable data truncation-based reconstruction method

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. If the object possesses spatially variant acoustic properties that are unaccounted for by the reconstruction method, the estimated image can contain distortions. While reconstruction methods have recently been developed to compensate for this effect, they generally require the object's acoustic properties to be known a priori. To circumvent the need for detailed information regarding an object's acoustic properties, we previously proposed a half-time reconstruction method for PACT. A half-time reconstruction method estimates the PACT image from a data set that has been temporally truncated to exclude the data components that have been strongly aberrated. However, this method can be improved upon when the approximate sizes and locations of isolated heterogeneous structures, such as bones or gas pockets, are known. To address this, we investigate PACT reconstruction methods that are based on a variable data truncation (VDT) approach. The VDT approach represents a generalization of the half-time approach, in which the degree of temporal truncation for each measurement is determined by the distance between the corresponding ultrasonic transducer location and the nearest known bone or gas void location. Computer-simulated and experimental data are employed to demonstrate the effectiveness of the approach in mitigating artifacts due to acoustic heterogeneities.
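
    The key bookkeeping in the VDT approach described above is a per-measurement truncation time set by how far each transducer sits from the nearest known bone or gas void. Below is a minimal NumPy sketch of that step, assuming a uniform background sound speed c0 and zero-filling of the truncated samples; the function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def vdt_truncation_times(transducer_positions, heterogeneity_positions, c0=1500.0):
    """Per-transducer truncation times for a VDT-style reconstruction.

    Each measurement is kept only up to the arrival time of signals that
    originate at the nearest known heterogeneity (bone or gas void), so the
    later, strongly aberrated components are discarded.  Positions are
    (N, 3) / (M, 3) arrays in metres; c0 is a uniform sound speed in m/s.
    """
    d = np.linalg.norm(
        transducer_positions[:, None, :] - heterogeneity_positions[None, :, :],
        axis=-1,
    )
    return d.min(axis=1) / c0  # seconds; one truncation time per transducer

def truncate_sinogram(sinogram, dt, t_trunc):
    """Zero out samples recorded after each transducer's truncation time."""
    n_transducers, n_samples = sinogram.shape
    t = np.arange(n_samples) * dt
    return sinogram * (t[None, :] <= t_trunc[:, None])
```

    Setting every truncation time to the same half-acquisition value recovers the earlier half-time truncation, which is one sense in which the VDT approach generalizes it.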

    Muon Track Reconstruction and Data Selection Techniques in AMANDA

    The Antarctic Muon And Neutrino Detector Array (AMANDA) is a high-energy neutrino telescope operating at the geographic South Pole. It is a lattice of photo-multiplier tubes buried deep in the polar ice at depths between 1500 m and 2000 m. The primary goal of this detector is to discover astrophysical sources of high-energy neutrinos. A high-energy muon neutrino coming through the Earth from the Northern Hemisphere can be identified by the secondary muon moving upward through the detector. The muon tracks are reconstructed with a maximum likelihood method that models the arrival times and amplitudes of Cherenkov photons registered by the photo-multipliers. This paper describes the different reconstruction methods that have been successfully implemented within AMANDA. Strategies for optimizing the reconstruction performance and rejecting background are presented. For a typical analysis procedure, track directions are reconstructed with an accuracy of about 2 degrees. Comment: 40 pages, 16 PostScript figures, uses elsart.sty
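
    As a rough illustration of the maximum-likelihood reconstruction mentioned above, the sketch below fits a straight-track hypothesis to photo-multiplier hit times with SciPy. The timing model is a crude direct-light approximation with a Gaussian residual term (AMANDA's actual likelihoods use scattering-aware arrival-time PDFs and also the registered amplitudes); every name and constant here is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

C_VAC = 0.2998   # m/ns, speed of light in vacuum
C_ICE = 0.222    # m/ns, rough group velocity of light in ice (illustrative)

def expected_hit_times(params, om_positions):
    """Geometric arrival times for a straight muon track.

    params = (x0, y0, z0, theta, phi, t0): a point on the track, its direction
    angles, and the time the muon passes that point.  The timing model is a
    simplified direct-light geometry, not AMANDA's full Cherenkov-cone model.
    """
    x0, y0, z0, theta, phi, t0 = params
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    r = om_positions - np.array([x0, y0, z0])
    along = r @ d                                           # distance along the track
    perp = np.linalg.norm(r - np.outer(along, d), axis=1)   # closest-approach distance
    return t0 + along / C_VAC + perp / C_ICE

def neg_log_likelihood(params, om_positions, hit_times, sigma=15.0):
    """Gaussian stand-in for the arrival-time PDF; sigma in ns."""
    res = hit_times - expected_hit_times(params, om_positions)
    return np.sum(0.5 * (res / sigma) ** 2)

def fit_track(om_positions, hit_times, seed):
    """Maximize the likelihood (minimize its negative) from a seed track."""
    return minimize(neg_log_likelihood, np.asarray(seed, float),
                    args=(om_positions, hit_times), method="Nelder-Mead")
```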

    Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography

    Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they can incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms to improve image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performance of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications.
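
    To make the penalized least-squares formulation concrete, here is a small gradient-descent sketch in NumPy with the two penalties named above: a quadratic smoothness term and a smoothed total-variation term. The dense system matrix, 1-D image, fixed step size, and penalty weight are simplifying assumptions; the paper's 3D OAT models are far larger and incorporate the measured transducer impulse responses.

```python
import numpy as np

def pls_reconstruct(H, g, beta=1e-2, n_iter=200, step=1e-3, penalty="quadratic"):
    """Sketch of penalized least squares: minimize ||H f - g||^2 + beta * R(f)
    by plain gradient descent, with a non-negativity clip at each step.
    H is assumed to be a dense ndarray here purely for brevity."""
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad_data = H.T @ (H @ f - g)     # gradient of the data-fidelity term
        d = np.diff(f)                    # 1-D finite differences of the image
        grad_pen = np.zeros_like(f)
        if penalty == "quadratic":        # quadratic smoothness: R = sum(d^2)
            grad_pen[:-1] -= 2 * d
            grad_pen[1:] += 2 * d
        else:                             # smoothed TV: R = sum(sqrt(d^2 + eps))
            w = d / np.sqrt(d**2 + 1e-8)
            grad_pen[:-1] -= w
            grad_pen[1:] += w
        f = np.clip(f - step * (grad_data + beta * grad_pen), 0.0, None)
    return f
```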

    3D Face Reconstruction by Learning from Synthetic Data

    Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional Neural Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, there are currently no large-volume data sets, and acquiring such big data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions. Comment: The first two authors contributed equally to this work
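
    A hedged PyTorch sketch of the kind of network described above: a small CNN that regresses a vector of geometry coefficients directly from a face image and is trained on synthetic renders whose ground-truth coefficients are known. The architecture, input size, and coefficient count are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class FaceGeometryCNN(nn.Module):
    """Toy CNN regressing face-geometry coefficients (e.g., morphable-model
    parameters) from an RGB image; illustrative architecture only."""
    def __init__(self, n_coeffs=199):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_coeffs)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, true_coeffs):
    """Synthetic renders come with exact geometry, so training reduces to a
    plain regression loss on the known coefficients."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), true_coeffs)
    loss.backward()
    optimizer.step()
    return loss.item()
```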

    GPU-based Iterative Cone Beam CT Reconstruction Using Tight Frame Regularization

    X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical concern in most image-guided radiation therapy procedures. It is the goal of this paper to develop a fast GPU-based algorithm to reconstruct high-quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. For this purpose, we have developed an iterative tight frame (TF) based CBCT reconstruction algorithm. A condition that a real CBCT image has a sparse representation under a TF basis is imposed in the iteration process as a regularization of the solution. To speed up the computation, a multi-grid method is employed. Our GPU implementation has achieved high computational efficiency, and a CBCT image of resolution 512×512×70 can be reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom and a physical Catphan phantom. It is found that our TF-based algorithm is able to reconstruct CBCT images in the context of undersampling and low mAs levels. We have also quantitatively analyzed the reconstructed CBCT image quality in terms of modulation transfer function and contrast-to-noise ratio under various scanning conditions. The results confirm the high CBCT image quality obtained from our TF algorithm. Moreover, our algorithm has also been validated in a real clinical context using a head-and-neck patient case. Comparisons between the developed TF algorithm and the current state-of-the-art TV algorithm have also been made in the various cases studied, in terms of reconstructed image quality and computational efficiency. Comment: 24 pages, 8 figures, accepted by Phys. Med. Biol.
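
    The iteration described above alternates a data-fidelity update with enforcement of sparsity under a tight frame. The NumPy sketch below follows that ISTA-like pattern, with an orthogonal wavelet (via PyWavelets) standing in for the paper's tight frame and user-supplied forward/back projector functions; the GPU implementation, multi-grid acceleration, and parameter tuning are all omitted.

```python
import numpy as np
import pywt

def tf_regularized_recon(A, At, proj, shape, n_iter=50, step=1e-3,
                         thresh=0.02, wavelet="haar", level=3):
    """ISTA-like sketch: gradient step on the projection-data fidelity,
    then soft-thresholding of transform coefficients, then positivity.
    A/At are callables for the forward and back projection."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        x = x - step * At(A(x) - proj)          # data-fidelity gradient step
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
            for band in coeffs[1:]
        ]                                       # enforce sparsity in the transform
        x = pywt.waverec2(coeffs, wavelet)[:shape[0], :shape[1]]
        x = np.clip(x, 0.0, None)               # positivity constraint
    return x
```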

    FastJet user manual

    FastJet is a C++ package that provides a broad range of jet finding and analysis tools. It includes efficient native implementations of all widely used 2-to-1 sequential recombination jet algorithms for pp and e+e- collisions, as well as access to third-party jet algorithms through a plugin mechanism, including all currently used cone algorithms. FastJet also provides means to facilitate the manipulation of jet substructure, including some common boosted heavy-object taggers, as well as tools for estimation of pileup and underlying-event noise levels, determination of jet areas, and subtraction or suppression of noise in jets. Comment: 69 pages. FastJet 3 is available from http://fastjet.fr
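
    For orientation, a minimal clustering example written against FastJet's optional Python interface, which mirrors the C++ API (whether it is available depends on how FastJet was built); the toy particles and the 20 GeV cut are made up for illustration.

```python
import fastjet as fj  # assumes FastJet was built with its Python bindings

# A few toy particles as (px, py, pz, E) in GeV; a real analysis would fill
# these from an event record.
particles = [fj.PseudoJet(0.5, 60.0, 10.0, 61.0),
             fj.PseudoJet(0.4, 58.0, 12.0, 59.5),
             fj.PseudoJet(-30.0, -55.0, 5.0, 63.0)]

# Cluster with anti-kt, R = 0.4 (one of the 2-to-1 sequential recombination
# algorithms the manual covers), then list jets above 20 GeV.
jet_def = fj.JetDefinition(fj.antikt_algorithm, 0.4)
cs = fj.ClusterSequence(particles, jet_def)
for jet in fj.sorted_by_pt(cs.inclusive_jets(20.0)):
    print(f"pt = {jet.pt():6.1f}  rap = {jet.rap():5.2f}  phi = {jet.phi():5.2f}")
```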

    Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    PURPOSE: We develop a practical, iterative algorithm for image reconstruction in under-sampled tomographic systems, such as digital breast tomosynthesis (DBT). METHOD: The algorithm controls image regularity by minimizing the image total p-variation (TpV), a function that reduces to the total variation when p=1.0 or to the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets (POCS). The fact that the tomographic system is under-sampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) reduction of the number of resultant volumes out of those allowed by fixing the data tolerance, by finding the minimum-TpV image for a fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in under-sampled tomography. RESULTS: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. CONCLUSION: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging. Comment: Submitted to Medical Physics
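
    A rough NumPy sketch of the alternation the abstract describes: a POCS-style pass that pushes the image toward data consistency and positivity, followed by a steepest-descent step on the total p-variation. The simple 2-D gradients, fixed step sizes, and the projector callables A/At are illustrative assumptions rather than the authors' tuned DBT implementation.

```python
import numpy as np

def tpv_pocs(A, At, data, shape, p=0.8, n_iter=50, eps=1e-8,
             fid_step=0.5, tpv_step=0.2):
    """Alternate (i) a gradient step toward data consistency plus a positivity
    projection with (ii) descent on TpV = sum((|grad x|^2 + eps)^(p/2)).
    A/At are callables for forward and back projection of a 2-D slice."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        # POCS-style stage: data consistency, then positivity.
        x = np.clip(x - fid_step * At(A(x) - data), 0.0, None)
        # TpV descent stage, using forward differences and a rough divergence.
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        w = (dx**2 + dy**2 + eps) ** (p / 2 - 1)
        wx, wy = w * dx, w * dy
        div = (wx - np.roll(wx, 1, axis=0)) + (wy - np.roll(wy, 1, axis=1))
        x = np.clip(x + tpv_step * p * div, 0.0, None)   # -grad(TpV) = p * div
    return x
```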