379 research outputs found

    Consensus image method for unknown noise removal

    Noise removal has long been, and remains, an important task in computer vision. It usually serves as a preprocessing step for other tasks such as segmentation or reconstruction. However, most existing denoising algorithms require the noise model to be known in advance. In this paper, we introduce a new consensus-based approach for dealing with unknown noise models. Several filtered images are obtained and then combined using multifuzzy sets and averaging aggregation functions; the final decision is made with a penalty function that delivers the compromise image. Results show that this approach is consistent and provides a good compromise between filters. This work is supported by the European Commission under Contract No. 238819 (MIBISOC Marie Curie ITN). H. Bustince was supported by Project TIN 2010-15055 of the Spanish Ministry of Science.
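    The abstract above sketches a pipeline: apply several filters, aggregate the results, and pick a compromise image with a penalty function. Below is a minimal Python sketch of that idea; the multifuzzy-set aggregation and the paper's actual penalty function are not reproduced, so a pixelwise mean and an L2 penalty stand in for them, and the median/Gaussian/uniform filter bank is purely illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

        def consensus_denoise(noisy, filters=None):
            """Combine several denoising filters and return a compromise image.

            Stand-ins only: a pixelwise mean replaces the paper's multifuzzy-set
            aggregation, and an L2 distance replaces its penalty function.
            """
            if filters is None:
                filters = [
                    lambda x: median_filter(x, size=3),
                    lambda x: gaussian_filter(x, sigma=1.0),
                    lambda x: uniform_filter(x, size=3),
                ]
            # Apply every candidate filter to the same noisy image.
            candidates = np.stack([f(noisy) for f in filters])

            # Averaging aggregation stand-in: pixelwise mean of the candidates.
            aggregated = candidates.mean(axis=0)

            # Penalty of each candidate against the aggregate (L2 stand-in);
            # the candidate with the smallest penalty is the compromise image.
            penalties = [np.sum((c - aggregated) ** 2) for c in candidates]
            return candidates[int(np.argmin(penalties))]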

    Research Status and Prospect for CT Imaging

    Computed tomography (CT) is a valuable imaging method that plays an important role in clinical diagnosis. As radiation dose has drawn growing attention in recent years, reducing CT radiation dose without degrading image quality has become an active research direction in medical imaging. This chapter reviews the state of research on low-dose technology from the following aspects: low-dose scan implementation, reconstruction methods, and image processing methods. Furthermore, other technologies related to the development of CT, such as automatic tube current modulation, rapid peak kilovoltage (kVp) switching, dual-source CT, and nano-CT, are also summarized. Finally, future research prospects are discussed and analyzed.

    Attenuation correction of myocardial perfusion scintigraphy images without transmission scanning

    Attenuation correction is essential for reliable interpretation of emission tomography; however, the use of transmission measurements to generate attenuation maps is limited by the availability of equipment and potential mismatches between the transmission and emission measurements. This work investigates the possibility of estimating an attenuation map from measured scatter data without a transmission scan. A scatter model has been developed that predicts the distribution of photons which have been scattered once. The scatter model has been used as the basis of a maximum likelihood gradient ascent method (SMLGA) to estimate an attenuation map from measured scatter data. The SMLGA algorithm has been combined with an existing algorithm using photopeak data to estimate an attenuation map (MLAA) in order to obtain a more accurate attenuation map than either algorithm provides alone. Iterations of the SMLGA-MLAA algorithm are alternated with iterations of the MLEM algorithm to estimate the activity distribution. Initial tests of the algorithm were performed in 2 dimensions using idealised data before extension to 3 dimensions. The basic algorithm has been tested in 3 dimensions using projection data simulated with a Monte Carlo simulator and software phantoms. All soft tissues within the body have similar attenuation characteristics, so only a small number of different attenuation values are normally present. A level-set technique to restrict the attenuation map to a piecewise constant function has therefore been investigated as a potential way to improve the quality of the reconstructed attenuation map. The basic SMLGA-MLAA algorithm contains a number of assumptions; their effect has been investigated and the model extended to include photons which are scattered more than once and scatter correction of the photopeak. The effect of different phantom shapes and activity distributions has been assessed and the final algorithm tested using data acquired with a physical phantom.
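    The abstract above describes alternating an attenuation-map update (SMLGA-MLAA) with an MLEM activity update. The Python sketch below shows only that alternating structure under strong simplifications: the scatter model (the SMLGA term) is omitted, the attenuation update is a plain gradient-ascent step on the photopeak Poisson log-likelihood, and system_matrix, photopeak, and the step size are hypothetical placeholders rather than anything taken from the thesis.

        import numpy as np

        def alternating_reconstruction(photopeak, system_matrix, n_outer=20, step=1e-6):
            """Schematic alternation of attenuation and activity updates.

            Heavily simplified stand-in for the SMLGA-MLAA / MLEM scheme: the
            scatter (SMLGA) term is omitted, so only the photopeak likelihood
            drives the attenuation update. All inputs are dense NumPy arrays.
            """
            n_pix = system_matrix.shape[1]
            mu = np.full(n_pix, 0.01)      # attenuation map estimate
            activity = np.ones(n_pix)      # activity estimate

            for _ in range(n_outer):
                # Expected photopeak counts for the current estimates.
                atten = np.exp(-(system_matrix @ mu))
                expected = atten * (system_matrix @ activity)

                # Attenuation update: gradient ascent on the Poisson
                # log-likelihood with respect to mu (MLAA-style step).
                grad_mu = system_matrix.T @ (expected - photopeak)
                mu = np.maximum(mu + step * grad_mu, 0.0)

                # Activity update: one MLEM iteration with attenuation folded in.
                atten = np.exp(-(system_matrix @ mu))
                forward = atten * (system_matrix @ activity)
                ratio = np.divide(photopeak, forward,
                                  out=np.zeros_like(forward), where=forward > 0)
                sens = system_matrix.T @ atten
                activity *= (system_matrix.T @ (atten * ratio)) / np.maximum(sens, 1e-12)

            return mu, activity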

    A Novel Adaptive Probabilistic Nonlinear Denoising Approach for Enhancing PET Data Sinogram

    We propose filtering the PET sinograms with a constrained curvature motion diffusion. The edge-stopping function is computed in terms of edge probability under the assumption of contamination by Poisson noise. We show that the Chi-square distribution is the appropriate prior for finding the edge probability in the noise-free sinogram gradient. Since the sinogram noise is uncorrelated and follows a Poisson distribution, we propose an adaptive probabilistic diffusivity function in which the edge probability is computed at each pixel. The filter is applied to the 2D sinogram prior to reconstruction. The PET images are then reconstructed using Ordered Subset Expectation Maximization (OSEM). We demonstrate through simulations with images contaminated by Poisson noise that the proposed method substantially outperforms recently published methods, both visually and in terms of statistical measures.
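    The abstract above describes edge-preserving diffusion of the sinogram before reconstruction, with the diffusivity driven by a per-pixel edge probability. The sketch below uses a standard Perona-Malik diffusivity as a stand-in for the paper's Chi-square edge-probability function; n_iter, dt and kappa are illustrative values, not published settings. The denoised sinogram would then be handed to an OSEM reconstruction, which is not shown here.

        import numpy as np

        def diffuse_sinogram(sino, n_iter=30, dt=0.15, kappa=20.0):
            """Edge-preserving diffusion of a noisy 2D sinogram.

            A classical Perona-Malik diffusivity stands in for the paper's
            probabilistic (Chi-square based) edge-stopping function.
            """
            u = sino.astype(float)

            def g(d):
                # Edge-stopping weight: small diffusivity across strong gradients.
                return np.exp(-(d / kappa) ** 2)

            for _ in range(n_iter):
                # Differences to the four neighbours (zero flux at the borders).
                dn = -np.diff(u, axis=0, prepend=u[:1, :])   # u[i-1, j] - u[i, j]
                ds = np.diff(u, axis=0, append=u[-1:, :])    # u[i+1, j] - u[i, j]
                dw = -np.diff(u, axis=1, prepend=u[:, :1])   # u[i, j-1] - u[i, j]
                de = np.diff(u, axis=1, append=u[:, -1:])    # u[i, j+1] - u[i, j]

                # Explicit update: diffuse along edges rather than across them.
                u += dt * (g(dn) * dn + g(ds) * ds + g(dw) * dw + g(de) * de)
            return u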