
    Attenuation correction of myocardial perfusion scintigraphy images without transmission scanning

    Attenuation correction is essential for reliable interpretation of emission tomography; however, the use of transmission measurements to generate attenuation maps is limited by the availability of equipment and by potential mismatches between the transmission and emission measurements. This work investigates the possibility of estimating an attenuation map from measured scatter data without a transmission scan. A scatter model has been developed that predicts the distribution of photons that have been scattered once. This scatter model has been used as the basis of a maximum likelihood gradient ascent method (SMLGA) to estimate an attenuation map from measured scatter data. The SMLGA algorithm has been combined with an existing algorithm that estimates an attenuation map from photopeak data (MLAA) in order to obtain a more accurate attenuation map than either algorithm provides alone. Iterations of the combined SMLGA-MLAA algorithm are alternated with iterations of the MLEM algorithm, which estimates the activity distribution. Initial tests of the algorithm were performed in two dimensions using idealised data before extension to three dimensions. The basic algorithm has been tested in three dimensions using projection data simulated with a Monte Carlo simulator and software phantoms. All soft tissues within the body have similar attenuation characteristics, so only a small number of distinct attenuation values are normally present; a level-set technique that restricts the attenuation map to a piecewise constant function has therefore been investigated as a way to improve the quality of the reconstructed attenuation map. The basic SMLGA-MLAA algorithm contains a number of assumptions; their effect has been investigated, and the model has been extended to include photons that are scattered more than once and scatter correction of the photopeak data. The effect of different phantom shapes and activity distributions has been assessed, and the final algorithm has been tested with data acquired from a physical phantom.
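    A minimal numerical sketch of the alternation described above is given below: an MLEM update of the activity with the attenuation map held fixed, interleaved with a gradient-ascent update of the attenuation map with the activity held fixed. This is not the thesis's SMLGA-MLAA implementation; it uses only attenuated photopeak-style projections (no scatter model), and the 1-D geometry with one detector at each end of the array, the step size and the voxel size are illustrative assumptions only.

```python
# Toy joint activity/attenuation estimation on a 1-D array of voxels with a
# detector at each end.  Not the SMLGA-MLAA algorithm of the thesis; it only
# illustrates the alternation of an MLEM activity update with a
# gradient-ascent attenuation update.
import numpy as np

n, dx, step = 32, 1.0, 1e-3            # voxels, voxel size, ascent step (assumed values)

mu_true = np.full(n, 0.02)             # ground-truth attenuation
x_true = np.zeros(n)
x_true[10:20] = 5.0                    # ground-truth activity: one hot region

def survival(mu):
    """exp(-line integral of mu) from each voxel to the two detectors.
    Column 0: left-end detector, column 1: right-end detector."""
    to_left = np.cumsum(mu) - mu
    to_right = (np.cumsum(mu[::-1]) - mu[::-1])[::-1]
    return np.exp(-dx * np.stack([to_left, to_right], axis=1))   # shape (n, 2)

def project(x, mu):
    """Attenuated line integrals seen by the two detectors."""
    return survival(mu).T @ x                                    # shape (2,)

meas = project(x_true, mu_true)        # noise-free toy measurement

x = np.ones(n)                         # activity estimate
mu = np.full(n, 0.01)                  # attenuation estimate
for _ in range(500):
    a = survival(mu)                   # current system matrix, shape (n, 2)

    # MLEM step for the activity (attenuation held fixed).
    ratio = meas / np.maximum(a.T @ x, 1e-12)
    x *= (a @ ratio) / np.maximum(a.sum(axis=1), 1e-12)

    # Gradient-ascent step for the attenuation (activity held fixed):
    # d logL / d mu_j = -dx * sum_d (y_d / p_d - 1) * sum_{k: j on path k->d} a_kd x_k
    resid = meas / np.maximum(a.T @ x, 1e-12) - 1.0
    w = a * x[:, None]                                           # per-voxel contribution to each detector
    path_left = np.cumsum(w[::-1, 0])[::-1] - w[:, 0]            # voxels k > j heading to the left detector
    path_right = np.cumsum(w[:, 1]) - w[:, 1]                    # voxels k < j heading to the right detector
    grad = -dx * (resid[0] * path_left + resid[1] * path_right)
    mu = np.maximum(mu + step * grad, 0.0)                       # keep attenuation non-negative

print("measured:", meas, "reconstructed:", project(x, mu))
```

    With only two measurement bins the toy problem is severely underdetermined, so the example illustrates the structure of the alternating updates rather than a converging reconstruction; a realistic implementation would use a full tomographic projector and, as in the work above, the scatter measurements that make the attenuation estimate better determined.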

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed within a Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.

    Image intensity normalisation by maximising the Siddon line integral in the joint intensity distribution space

    This paper presents a novel data-driven method for image intensity normalisation, which is a prerequisite step for any kind of image comparison. The method is based on a linear normalisation model with either one or two parameters and involves a novel application of the Siddon algorithm, initially developed for fast reconstruction of tomographic images. The parameters are estimated by maximising the line integral, computed using the Siddon algorithm, in the 2D joint intensity distribution space of the image pair. The proposed normalisation method, referred to as Siddon Line Integral Maximisation (SLIM), was compared with three other methodologies, namely background ratio (BAR) scaling, linear fitting and proportional scaling, using a large number of synthesised datasets. SLIM was also compared with BAR normalisation when applied to phantom data and two clinical examples. The new method was found to be more accurate and less biased than its counterparts for the range of characteristics selected for the synthesised data. These findings were in agreement with the results from the analysis of the experimental and clinical data.
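    The sketch below illustrates the idea described above: fit a linear model v = a*u + b between two images by choosing (a, b) to maximise the integral of their 2D joint intensity histogram along that line. It is not the published SLIM implementation; for brevity the exact Siddon grid traversal is replaced by dense sampling along the line, and the histogram size, parameter search ranges and synthetic test images are illustrative assumptions.

```python
# SLIM-style intensity normalisation sketch: maximise the joint-histogram
# line integral over the parameters of a linear model v = a*u + b.
# The Siddon traversal is approximated by dense sampling along the line.
import numpy as np

def joint_histogram(img_a, img_b, bins=64):
    """2-D joint intensity histogram of an image pair, with the bin edges."""
    h, u_edges, v_edges = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return h, u_edges, v_edges

def line_integral(hist, u_edges, v_edges, a, b, n_samples=512):
    """Approximate integral of the histogram along the line v = a*u + b."""
    u = np.linspace(u_edges[0], u_edges[-1], n_samples)
    v = a * u + b
    iu = np.clip(np.searchsorted(u_edges, u) - 1, 0, hist.shape[0] - 1)
    iv = np.searchsorted(v_edges, v) - 1
    inside = (iv >= 0) & (iv < hist.shape[1])                    # keep samples inside the histogram
    return hist[iu[inside], iv[inside]].sum()

def slim_normalise(img_a, img_b, slopes, offsets):
    """Grid search for the (slope, offset) maximising the line integral,
    then map img_a into the intensity range of img_b."""
    hist, u_edges, v_edges = joint_histogram(img_a, img_b)
    _, a, b = max(((line_integral(hist, u_edges, v_edges, a, b), a, b)
                   for a in slopes for b in offsets), key=lambda t: t[0])
    return a * img_a + b, (a, b)

# Toy usage: img_b is a scaled, shifted, noisy copy of img_a.
rng = np.random.default_rng(0)
img_a = rng.gamma(2.0, 50.0, size=(128, 128))
img_b = 1.8 * img_a + 30.0 + rng.normal(0.0, 5.0, size=img_a.shape)
normed, (a, b) = slim_normalise(img_a, img_b,
                                slopes=np.linspace(0.5, 3.0, 51),
                                offsets=np.linspace(-50.0, 100.0, 31))
print(f"estimated slope {a:.2f}, offset {b:.1f}")
```

    A one-parameter variant of the model would fix the offset to zero and search over the slope only; a continuous optimiser could replace the grid search once a reasonable bracket for the parameters is known.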

    Relationship between synoptic circulations and the spatial distributions of rainfall in Zimbabwe

    This study examines how atmospheric circulation patterns in Africa south of the equator govern the spatial distribution of precipitation in Zimbabwe. The moisture circulation patterns are represented by a set of eight classified circulation types (CTs). It is shown that all wet CTs over Zimbabwe feature enhanced cyclonic/convective activity in the southwest Indian Ocean, so enhanced moisture availability in the southwest Indian Ocean is necessary for rainfall formation in parts of Zimbabwe. The wettest CT in Zimbabwe is characterized by a ridging South Atlantic Ocean high-pressure system south of South Africa, driving abundant southeasterly moisture fluxes from the southwest Indian Ocean into Zimbabwe. Owing to the proximity of Zimbabwe to the warm Agulhas and Mozambique currents, the activity of the ridging South Atlantic Ocean anticyclone is a dominant synoptic feature that favors above-average rainfall in Zimbabwe. It is also shown that, when coupled with a weaker Mascarene high, a ridging South Atlantic Ocean high-pressure system south of South Africa can favor the southwestward movement of tropical cyclones onto the eastern coastal landmass, resulting in above-average rainfall in Zimbabwe. The driest CT is characterized by the northward track of Southern Hemisphere mid-latitude cyclones, leading to enhanced westerly fluxes in the southwest Indian Ocean and limiting moist southeasterly winds into Zimbabwe.

    Natural or anthropogenic variability? A long-term pattern of the zooplankton communities in an ever-changing transitional ecosystem

    The Venice Lagoon is an important site belonging to the Italian Long-Term Ecological Research Network (LTER). Alongside the increasing trend in water temperature and the relevant morphological changes, in recent years the resident zooplankton populations have also had to cope with colonization by alien species, particularly the strong competitor Mnemiopsis leidyi. In this work, we compared the dynamics of the lagoon zooplankton over a period of 20 years. The physical and biological signals are analyzed and compared to evaluate the hypothesis that a slow shift in the environmental balance of the site, such as temperature increase, sea level rise (hereafter called “marinization”), and competition between species, is contributing to a drift in the internal equilibrium of the resident core zooplankton. Although the copepod community does not seem to have changed its state, some important modifications of its structure and assembly mechanisms have already been observed. The extension of the marine influence within the lagoon has compressed the spatial gradients of the habitat, increased the segregation of the niches available to some typically estuarine taxa, and broadened and strengthened the interactions between marine species.