2 research outputs found

    Statistical Performance Evaluation, System Modeling, Distributed Computation, and Signal Pattern Matching for a Compton Medical Imaging System.

    Radionuclide cancer therapy requires imaging radiotracers that concentrate in tumors and emit high-energy charged particles that kill tumor cells. These tracers, such as 131I, generally also emit high-energy photons that must be imaged to estimate tumor dose and changes in tumor size during treatment. This research describes the performance of a dual-planar silicon-based Compton imaging system and compares it to a conventional parallel-hole collimated Anger camera with a high energy general purpose (HEGP) lead collimator for imaging photons emitted by 131I. The collimated Anger camera imposes a tradeoff between resolution and sensitivity due to its mechanical collimation. As photon energies exceed 364 keV, increased septal penetration and scattering further degrade imaging performance. Simulations of the Anger camera and the Compton imaging system demonstrate a 20-fold advantage in detection efficiency and higher spatial resolution for the Compton camera when detecting high-energy photons, since it decouples this tradeoff. The system performance and comparison are analyzed using the modified uniform Cramer-Rao bound algorithms we developed, along with Monte Carlo calculations and system modeling. The bounds show that Doppler broadening is the limiting factor in Compton camera performance for imaging 364 keV photons. The performance of the two systems was compared and analyzed by simulating a 2D disk with uniform activity. For the case in which the two imaging systems detected the same number of events, the proposed Compton imaging system has lower image variance than the Anger camera with the HEGP collimator when the FWHM of the desired point source response is less than 1.2 cm. This advantage was also demonstrated by imaging and reconstructing a 2D hot-spot phantom.
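A Compton camera avoids mechanical collimation by inferring each photon's incident direction from Compton kinematics: the energy deposited in the first (scatter) detector determines the scattering angle, and hence a cone of possible source directions. A minimal sketch of that cone-angle computation, using the standard Compton scattering formula (the function name and structure are illustrative, not taken from the dissertation):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV


def compton_cone_angle(e0_kev, e_dep_kev):
    """Scattering angle in degrees for a photon of energy e0_kev that
    deposits e_dep_kev in the scatter detector.

    Uses cos(theta) = 1 - m_e*c^2 * (1/E1 - 1/E0) with E1 = E0 - E_dep.
    Returns None if the deposit is kinematically impossible."""
    e1 = e0_kev - e_dep_kev
    if e1 <= 0.0:
        return None
    cos_theta = 1.0 - M_E_C2 * (1.0 / e1 - 1.0 / e0_kev)
    if cos_theta < -1.0 or cos_theta > 1.0:
        return None
    return math.degrees(math.acos(cos_theta))


# A 364 keV photon from 131I depositing 100 keV scatters through ~62 degrees.
angle = compton_cone_angle(364.0, 100.0)
```

Doppler broadening blurs the deposited energy (the struck electron is not at rest), which smears the reconstructed cone angle; this is why the abstract identifies it as the limiting factor at 364 keV.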
In addition to the performance analysis, the distributed Maximum Likelihood Expectation Maximization (ML-EM) algorithm with chessboard data partition was evaluated for speeding up image reconstruction for the Compton imaging system. A 1 x 64 distributed computing system sped up computation by about a factor of 22 compared to a single processor. Finally, a real-time signal processing and pattern matching system employing state-of-the-art digital electronics is described for solving the event pile-up problems caused by the high photon count rate in the second detector.

Ph.D. dissertation, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/60851/1/lhan_1.pd
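The abstract above does not spell out the chessboard partition, but the idea behind such interleaved data partitions can be sketched: assign pixels to workers in an alternating pattern so each worker gets a balanced, spatially spread share, and require that per-worker partial results recombine to the serial answer. A minimal sketch under those assumptions (all names are illustrative):

```python
def chessboard_partition(n_rows, n_cols, n_workers):
    """Assign pixel (i, j) to worker (i + j) % n_workers, an alternating
    (chessboard-like) pattern that balances load across workers."""
    tiles = [[] for _ in range(n_workers)]
    for i in range(n_rows):
        for j in range(n_cols):
            tiles[(i + j) % n_workers].append((i, j))
    return tiles


def partial_sum(image, pixels):
    """Work done by one worker: accumulate only over its assigned pixels."""
    return sum(image[i][j] for (i, j) in pixels)


# Correctness condition for any data partition: the per-worker partials
# must recombine to exactly the serial result.
image = [[(i * 7 + j) % 5 for j in range(8)] for i in range(8)]
tiles = chessboard_partition(8, 8, 4)
serial = sum(sum(row) for row in image)
distributed = sum(partial_sum(image, t) for t in tiles)
```

In a real distributed ML-EM run, each processor would hold its tile's portion of the update and the partials would be combined by a global reduction each iteration; communication cost of that reduction is what limits the speedup (about 22x on 64 processors here).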

    On Parallelizing the EM Algorithm for PET Image Reconstruction

    No full text
      The EM algorithm is one of the most suitable iterative methods for PET image reconstruction; however, it requires a long computation time and an enormous amount of memory. To overcome these two problems, in this paper we present two classes of highly efficient parallelization schemes. The essential difference between the two classes is that the inhomogeneous partitioning schemes may partially overlap communication with computation by deliberately exploiting the inherent data access pattern with a multiple-ring communication pattern. In theory, the inhomogeneous partitioning schemes may outperform the homogeneous partitioning schemes; however, the latter require a simpler communication pattern. To estimate the achievable performance and analyze the performance degradation factors without actual implementations, we derive efficiency prediction formulas that closely estimate the performance of the proposed parallelization schemes. We also propose new integration and broadcasting algorithms for hypercube, ring, and n-D mesh topologies, which are more efficient than the conventional algorithms when the link setup time is relatively negligible. We believe that the proposed task and data partitioning schemes, the integration and broadcasting algorithms, and the efficiency estimation methods can be applied to many other problems that are rich in data parallelism but lack a balanced exclusive partitioning.