
    Objective assessment of image quality (OAIQ) in fluorescence-enhanced optical imaging

    The statistical evaluation of molecular imaging approaches for detecting, diagnosing, and monitoring molecular response to treatment is required prior to their adoption. The assessment of fluorescence-enhanced optical imaging is particularly challenging since neither instrument nor agent has been established. Small-animal imaging does not adequately address depth-of-penetration issues, and the risk of administering molecular optical imaging agents to patients remains unknown. Herein, we focus upon the development of a framework for OAIQ which includes a lumpy-object model to simulate natural anatomical tissue structure as well as the non-specific distribution of fluorescent contrast agents. This work is required for adoption of fluorescence-enhanced optical imaging in the clinic. The imaging system is simulated by the diffusion approximation of the time-dependent radiative transfer equation, which describes near-infrared light propagation through clinically relevant volumes. We predict the time-dependent light propagation within a 200 cc breast interrogated with 25 points of excitation illumination and 128 points of fluorescent light collection. We simulate the fluorescence generation from Cardio-Green at tissue target concentrations of 1, 0.5, and 0.25 µM with backgrounds containing 0.01 µM. The fluorescence boundary measurements for 1 cc spherical targets simulated within lumpy backgrounds of (i) endogenous optical properties (absorption and scattering), as well as (ii) exogenous fluorophore cross-section, are generated with lump strength varying up to 100% of the average background. The imaging data are then used to validate a PMBF/CONTN tomographic reconstruction algorithm. Our results show that the image recovery is sensitive to the heterogeneous background structures. 
Further analysis of the imaging data by a Hotelling observer affirms that the detection capability of the imaging system is adversely affected by the presence of heterogeneous background structures. The issue is also addressed using human-observer studies wherein multiple cases of randomly located targets superimposed on random heterogeneous backgrounds are presented in a “double-blind” situation. The results of this study are consistent with the outcome of the above-mentioned analyses. Finally, the Hotelling observer's analysis is used to demonstrate (i) the inverse correlation between detectability and target depth, and (ii) the plateauing of detectability with improved excitation light rejection.
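
The linear (Hotelling) observer used in such detectability analyses applies a template w = K⁻¹Δḡ to the measurement vector g, giving detectability SNR² = Δḡᵀ K⁻¹ Δḡ, where Δḡ is the mean signal difference and K the intra-class data covariance. A minimal numerical sketch, using synthetic Gaussian data standing in for the fluorescence boundary measurements (all names and values illustrative, not the study's actual data):

```python
import numpy as np

def hotelling_snr(signal_present, signal_absent):
    """Hotelling-observer detectability from two ensembles of
    measurement vectors (rows = noise/background realizations)."""
    mean_diff = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
    # Pooled intra-class covariance of the measurement data
    cov = 0.5 * (np.cov(signal_present, rowvar=False)
                 + np.cov(signal_absent, rowvar=False))
    template = np.linalg.solve(cov, mean_diff)   # w = K^-1 * delta-g
    snr2 = mean_diff @ template                  # SNR^2 = dg^T K^-1 dg
    return np.sqrt(snr2)

rng = np.random.default_rng(0)
bg = rng.normal(0.0, 1.0, size=(500, 16))   # background-only realizations
sig = bg + 0.8                              # target adds a uniform offset
print(round(hotelling_snr(sig, bg), 2))
```

Stronger background lumps enlarge K and therefore depress the SNR, which is the mechanism behind the reduced detectability reported above.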

    Adaptive finite element methods for fluorescence enhanced optical tomography

    Fluorescence enhanced optical tomography is a promising molecular imaging modality which employs a near infrared fluorescent molecule as an imaging agent and time-dependent measurements of fluorescent light propagation and generation. In this dissertation a novel fluorescence tomography algorithm is proposed to reconstruct images of targets contrasted by fluorescence within the tissues from boundary fluorescence emission measurements. An adaptive finite element based reconstruction algorithm for high-resolution fluorescence tomography was developed and validated with non-contact, plane-wave frequency-domain fluorescence measurements on a tissue phantom. The image reconstruction problem was posed as an optimization problem in which the fluorescence optical property map which minimized the difference between the experimentally observed boundary fluorescence and that predicted from the diffusion model was sought. A regularized Gauss-Newton algorithm was derived and dual adaptive meshes were employed for solution of coupled photon diffusion equations and for updating the fluorescence optical property map in the tissue phantom. The algorithm was developed in a continuous function space setting in a mesh independent manner. This allowed the meshes to adapt during the tomography process to yield high resolution images of fluorescent targets and to accurately simulate the light propagation in tissue phantoms from area-illumination. Frequency-domain fluorescence data collected at the illumination surface were used for reconstructing the fluorescence yield distribution in a 512 cm3 tissue phantom filled with 1% Liposyn solution. Fluorescent targets containing 1 micromolar Indocyanine Green solution in 1% Liposyn were suspended at depths of up to 2 cm from the illumination surface. 
Fluorescence measurements at the illumination surface were acquired by a gain-modulated image-intensified CCD camera system outfitted with holographic band-rejection and optical band-pass filters. Excitation light at the phantom surface was quantified by utilizing crossed polarizers. Rayleigh resolution studies to determine the minimum detectable separation of two embedded fluorescent targets were attempted, and in the absence of measurement noise, resolution down to the transport limit of 1 mm was attained. The results of this work demonstrate the feasibility of high-resolution molecular tomography in the clinic with rapid, non-contact area measurements.
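
The regularized Gauss-Newton scheme described above repeatedly solves the linearized, Tikhonov-damped normal equations (JᵀJ + λI)Δx = Jᵀr, where J is the Jacobian of the forward model and r the data misfit. A minimal sketch on a toy two-parameter forward model, not the coupled photon-diffusion model of the dissertation:

```python
import numpy as np

def gauss_newton_step(residual, jacobian, x, lam):
    """One Tikhonov-regularized Gauss-Newton update:
    x+ = x + (J^T J + lam*I)^{-1} J^T r, with r = y - f(x)."""
    J = jacobian(x)
    r = residual(x)
    H = J.T @ J + lam * np.eye(x.size)   # regularized normal equations
    return x + np.linalg.solve(H, J.T @ r)

# Toy nonlinear forward model f(x) = [x0^2, x0*x1] fitted to data y
y = np.array([4.0, 6.0])                         # generated by x = (2, 3)
f = lambda x: np.array([x[0]**2, x[0]*x[1]])
residual = lambda x: y - f(x)
jacobian = lambda x: np.array([[2*x[0], 0.0],
                               [x[1],   x[0]]])

x = np.array([1.0, 1.0])
for _ in range(20):
    x = gauss_newton_step(residual, jacobian, x, lam=1e-6)
print(np.round(x, 3))
```

In the tomography setting λ trades data fidelity against stability of the ill-posed inversion, and x is the discretized fluorescence optical property map rather than two scalars.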

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in Bayesian and information-theoretic terms. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects for the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; exploiting the rich information provided by depth-of-interaction (DOI) and energy resolving devices. The document concludes with the description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.
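
Emission-tomographic inference of this kind is conventionally anchored by the maximum-likelihood expectation-maximization (MLEM) update for Poisson count data, x ← (x / Aᵀ1) ⊙ Aᵀ(y / Ax). A minimal sketch on a toy system matrix (purely illustrative, not the NiftyRec implementation):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM for emission tomography: multiplicative update
    x <- x / (A^T 1) * A^T (y / (A x)) from a uniform start."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward-project
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# Tiny 2-pixel, 3-LOR system with noiseless, consistent data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = mlem(A, y)
print(np.round(x_hat, 2))
```

The 4-D and multi-modal formulations in the thesis extend this likelihood model with temporal and anatomical (MR-derived) priors, which modify the update rather than replace it.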

    Reconstruction Algorithms for Novel Joint Imaging Techniques in PET

    Positron emission tomography (PET) is an important functional in vivo imaging modality with many clinical applications. Its enormously wide range of applications has made both research and industry combine it with other imaging modalities such as X-ray computed tomography (CT) or magnetic resonance imaging (MRI). The general purpose of this work is to study two cases in PET where the goal is to perform image reconstruction jointly on two data types. The first case is the Beta-Gamma image reconstruction. Positron emitting isotopes, such as 11C, 13N, and 18F, can be used to label molecules, and tracers, such as 11CO2, are delivered to plants to study their biological processes, particularly metabolism and photosynthesis, which may contribute to the development of plants that have higher yields of crops and biomass. Measurements and resulting images from PET scanners are not quantitative in young plant structures or in plant leaves due to low positron annihilation in thin objects. To address this problem, we have designed, assembled, modeled, and tested a nuclear imaging system (Simultaneous Beta-Gamma Imager). The imager can simultaneously detect positrons (β+) and coincidence-gamma rays (γ). The imaging system employs two planar detectors: one is a regular gamma detector which has a LYSO crystal array, and the other is a phoswich detector which has an additional BC-404 plastic scintillator for beta detection. A forward model for positrons is proposed along with a joint image reconstruction formulation to utilize the beta and coincidence-gamma measurements for estimating radioactivity distribution in plant leaves. The joint reconstruction algorithm first reconstructs the beta and gamma images independently to estimate the thickness component of the beta forward model, and then jointly estimates the radioactivity distribution in the object. 
We have validated the physics model and the reconstruction framework through a phantom imaging study and imaging a tomato leaf that has absorbed 11CO2. The results demonstrate that the simultaneously acquired beta and coincidence-gamma data, combined with our proposed joint reconstruction algorithm, improved the quantitative accuracy of estimating radioactivity distribution in thin objects such as leaves. We used the Structural Similarity (SSIM) index for comparing the leaf images from the Simultaneous Beta-Gamma Imager with the ground truth image. The jointly reconstructed images yield SSIM indices of 0.69 and 0.63, whereas the separately reconstructed beta-alone and gamma-alone images had indices of 0.33 and 0.52, respectively. The second case is the virtual-pinhole PET technology, which has shown that higher resolution and contrast recovery can be gained by adding a high-resolution PET insert with smaller crystals to a conventional PET scanner. Such enhancements are obtained when the insert is placed in proximity of the region of interest (ROI) and in coincidence with the conventional PET scanner. Intuitively, the insert may be positioned within the scanner's axial field-of-view (FOV) and radially closer to the ROI than the scanner's ring. One of the complicating factors of this design is the insert's blocking of the scanner's lines-of-response (LORs). Such data may be compensated for through attenuation and scatter correction in image reconstruction. However, a potential solution is to place the insert outside of the scanner's axial FOV and to move the body to be in proximity of the insert. We call this imaging strategy the surveillance mode. As the main focus of this work, we have developed an image reconstruction framework for surveillance-mode imaging. The preliminary results show improvement in spatial resolution and contrast recovery. Any improvement in contrast recovery should result in enhancement in tumor detectability, which will be of high clinical significance.
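
The SSIM figures quoted above combine luminance, contrast, and structure agreement between a reconstruction and the ground truth. A minimal single-window variant using global image statistics (the standard SSIM uses a sliding Gaussian window; this simplification is for illustration only):

```python
import numpy as np

def ssim_global(img1, img2, data_range=1.0):
    """Single-window SSIM from global statistics:
    [(2*mu1*mu2 + C1)(2*cov + C2)] / [(mu1^2 + mu2^2 + C1)(var1 + var2 + C2)]."""
    c1 = (0.01 * data_range) ** 2                # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu1, mu2 = img1.mean(), img2.mean()
    var1, var2 = img1.var(), img2.var()
    cov = ((img1 - mu1) * (img2 - mu2)).mean()
    return ((2*mu1*mu2 + c1) * (2*cov + c2)) / \
           ((mu1**2 + mu2**2 + c1) * (var1 + var2 + c2))

rng = np.random.default_rng(1)
truth = rng.random((32, 32))                     # stand-in "ground truth" image
noisy = truth + rng.normal(0.0, 0.2, truth.shape)
print(round(ssim_global(truth, truth), 3), round(ssim_global(truth, noisy), 3))
```

An identical image pair scores 1; added noise lowers the index, which is how the jointly and separately reconstructed leaf images were ranked.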

    Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography

    Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, this increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts appearing in the image due to the reduced number of projections visibly degrade the image quality. According to the compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms to suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford Splitting and the randomized Kaczmarz methods are utilized to solve the optimization problem of the compressed sensing formulation. In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, reduced-view inconsistent real ex-vivo synchrotron absorption-contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford Splitting and the preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. 
The wavelet image denoising algorithm is used as a post-processing step to attenuate the unwanted staircase artifact generated by the reconstruction algorithm. Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford Splitting and the preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image. It may lead to an unwanted staircase artifact when applied to noisy and textured images, so the wavelet regularization is used to attenuate the unwanted staircase artifact generated by the prior image constrained compressed sensing reconstruction algorithm. The visual and quantitative performance assessments with the reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms have fewer artifacts and reconstruction errors than other conventional reconstruction algorithms at the same x-ray dose.
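
The randomized Kaczmarz method mentioned in the first formulation handles the data-fidelity part of the problem by projecting the current iterate onto the hyperplane of one randomly selected projection equation per step, with row-selection probability proportional to the squared row norm (Strohmer-Vershynin sampling). A minimal sketch on a small synthetic consistent system, not the actual synchrotron projection data:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
    """Randomized Kaczmarz for A x = b: per step, pick row i with
    probability ||a_i||^2 / ||A||_F^2 and project the iterate onto
    the hyperplane a_i . x = b_i."""
    rng = np.random.default_rng(seed)
    row_norms = (A * A).sum(axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # orthogonal projection
    return x

# Overdetermined consistent system standing in for projection data
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 10))        # 40 "projection rays", 10 "pixels"
x_true = rng.normal(size=10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
print(round(np.linalg.norm(x - x_true), 6))
```

Each step touches a single row, which is what makes the method attractive for the very large, sparse system matrices arising in tomography.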

    Evaluation of probabilistic photometric redshift estimation approaches for the Rubin Observatory Legacy Survey of Space and Time (LSST)

    Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing 12 photo-z algorithms applied to mock data produced for The Rubin Observatory Legacy Survey of Space and Time Dark Energy Science Collaboration. By supplying perfect prior information, in the form of the complete template library and a representative training set as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/underbreadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
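
The conditional density estimate (CDE) loss singled out above scores an estimator as L = E[∫ p̂(z|x)² dz] − 2 E[p̂(z_true|x)], which differs from the mean integrated squared error against the true conditional density only by a constant, so it can be estimated from true redshifts alone. A minimal sketch for PDFs tabulated on a redshift grid (synthetic Gaussian PDFs, names illustrative):

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Empirical CDE loss on a uniform redshift grid:
    L = mean_i [ integral p_i(z)^2 dz ] - 2 * mean_i [ p_i(z_true_i) ].
    pdf_grid: (n_gal, n_z) array of PDFs evaluated on z_grid."""
    dz = z_grid[1] - z_grid[0]
    term1 = (pdf_grid ** 2).sum(axis=1).mean() * dz        # integral p^2 dz
    idx = np.clip(np.searchsorted(z_grid, z_true), 0, len(z_grid) - 1)
    term2 = pdf_grid[np.arange(len(z_true)), idx].mean()   # p_i at true z_i
    return term1 - 2.0 * term2

# PDFs centred on the true redshifts should score lower (better)
z = np.linspace(0.0, 3.0, 301)
z_true = np.array([0.5, 1.0, 1.5, 2.0])
gauss = lambda mu: (np.exp(-0.5 * ((z - mu[:, None]) / 0.1) ** 2)
                    / (0.1 * np.sqrt(2 * np.pi)))
good = cde_loss(gauss(z_true), z, z_true)
bad = cde_loss(gauss(z_true + 0.3), z, z_true)
print(good < bad)  # True: centred PDFs achieve lower CDE loss
```

Lower is better; because the constant offset is shared across codes, the loss ranks estimators even though the true PDFs are unknown.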
