17 research outputs found

    Equivariant Spherical CNN for Data Efficient and High-Performance Medical Image Processing

    Full text link
    This work highlights the significance of equivariant networks as efficient and high-performance approaches for tomography applications. Our study is motivated by the limitations of convolutional neural networks (CNNs), which have shown promise in post-processing for various medical imaging systems but whose performance relies heavily on a sufficiently large and representative training set. To tackle this issue, we introduce an equivariant network aimed at reducing the CNN's dependency on specific training sets. We evaluate the efficacy of equivariant CNNs on spherical signals for tomographic medical imaging problems. Our results demonstrate the superior quality and computational efficiency of spherical CNNs (SCNNs) in denoising and reconstructing benchmark problems. Furthermore, we propose a novel approach that employs SCNNs as a complement to conventional image reconstruction tools, enhancing the outcomes while reducing reliance on the training set. Across all cases, we observe a significant decrease in computational cost while maintaining the same or higher image-processing quality using SCNNs compared to CNNs. Additionally, we explore the potential of this network for broader tomography applications, particularly those requiring omnidirectional representation.
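
    The equivariance property at the heart of this abstract can be illustrated without any spherical machinery. Below is a minimal numpy sketch, assuming the planar C4 group of 90-degree rotations instead of rotations on the sphere: correlating an image with all four rotated copies of a filter produces feature maps that rotate and cyclically permute when the input rotates. This built-in weight sharing is what lets equivariant networks get by with smaller training sets; all names here are illustrative, not from the paper.

    ```python
    import numpy as np
    from scipy.signal import correlate2d

    def c4_lifting_conv(image, kernel):
        """Correlate the image with the kernel in all four 90-degree
        orientations, giving one response map per orientation."""
        return np.stack([
            correlate2d(image, np.rot90(kernel, k), mode="same")
            for k in range(4)
        ])

    rng = np.random.default_rng(0)
    img = rng.standard_normal((16, 16))
    ker = rng.standard_normal((3, 3))

    out = c4_lifting_conv(img, ker)
    out_rot = c4_lifting_conv(np.rot90(img), ker)

    # Equivariance: rotating the input rotates each response map and
    # cyclically shifts the orientation channels.
    expected = np.rot90(np.roll(out, 1, axis=0), axes=(1, 2))
    print(np.allclose(out_rot, expected))  # True
    ```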

    Reassessing the atmospheric oxidation mechanism of toluene

    Get PDF
    Photochemical oxidation of aromatic hydrocarbons leads to tropospheric ozone and secondary organic aerosol (SOA) formation, with profound implications for air quality, human health, and climate. Toluene is the most abundant aromatic compound in urban environments, but its detailed chemical oxidation mechanism remains uncertain. From combined laboratory experiments and quantum chemical calculations, we show a toluene oxidation mechanism that differs from the one adopted in current atmospheric models. Our experimental work indicates a larger-than-expected branching ratio for cresols, but negligible formation of ring-opening products (e.g., methylglyoxal). Quantum chemical calculations also demonstrate that cresols are much more stable than their corresponding peroxy radicals and that, for the most favorable OH (ortho) addition, the pathway of H abstraction by O2 to form the cresol proceeds with a smaller barrier than O2 addition to form the peroxy radical. Our results reveal that phenolic (rather than peroxy radical) formation represents the dominant pathway for toluene oxidation, highlighting the necessity to reassess its role in ozone and SOA formation in the atmosphere.
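
    The competition described by the calculations can be made concrete with a schematic rate-ratio estimate. In the Arrhenius picture below, the prefactors and barriers are placeholders rather than values from the paper; the point is only that a smaller barrier for H abstraction translates into a dominant cresol channel:

    ```latex
    % Competing fates of the OH-toluene adduct: H abstraction by O2
    % (forming cresol) vs. O2 addition (forming the peroxy radical).
    % A_i and E_{a,i} are placeholder Arrhenius parameters.
    \[
      \frac{k_{\mathrm{abs}}}{k_{\mathrm{add}}}
      \approx
      \frac{A_{\mathrm{abs}}}{A_{\mathrm{add}}}
      \exp\!\left(\frac{E_{a,\mathrm{add}} - E_{a,\mathrm{abs}}}{RT}\right)
    \]
    % With E_{a,abs} < E_{a,add} (the smaller barrier reported for the
    % ortho adduct) and comparable prefactors, the ratio exceeds one at
    % tropospheric temperatures, favoring the phenolic channel.
    ```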

    Modeling and regularization in tomographic reconstruction for Compton camera imaging

    No full text
    The Compton camera is a gamma-ray imaging device for single photon emission computed tomography (SPECT), with increased sensitivity compared to the Anger camera as it requires no mechanical collimation. The improvements it could bring to medical applications are the subject of numerous studies; they depend both on instrumental developments and on data processing techniques, among which tomographic reconstruction is a key step. The goal of this thesis is to improve the performance of iterative list-mode maximum likelihood expectation maximization (LM-MLEM) reconstruction algorithms. In Compton camera imaging, the acquisition model is based on the integral of the source intensity over conical projections, and modeling the measurement uncertainties in the system matrix strongly influences the result of LM-MLEM. One contribution of this study is to show that a more precise model of Doppler broadening, validated with Monte Carlo simulation, leads to more quantitative images. Another contribution is a total variation (TV) regularization method for Poisson-distributed data, introduced into the MLEM reconstruction as a denoising step; TV regularization strongly improves image quality in the low-count acquisitions encountered in the applications foreseen for this type of imaging. We also studied deconvolution with the point spread function in the image domain, in conjunction with the regularization, to correct physical effects that are too difficult to model in the transfer function and to restore the resolution of the reconstructed images. All the proposed methods were validated with Monte Carlo simulations.
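
    For readers unfamiliar with the algorithm named above, here is a minimal numpy sketch of the basic (binned-data) MLEM update that LM-MLEM builds on, assuming a precomputed system matrix A whose rows encode the conical projections. The list-mode bookkeeping, the Doppler-broadening model, and the TV step of the thesis are deliberately left out.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Basic MLEM for Poisson data y ~ Poisson(A @ lam).
        A: (n_measurements, n_voxels) system matrix; y: measured counts.
        Returns the estimated source intensity lam."""
        lam = np.ones(A.shape[1])   # positive initial estimate
        sens = A.sum(axis=0)        # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = A @ lam          # expected counts under current estimate
            lam *= (A.T @ (y / np.maximum(proj, eps))) / np.maximum(sens, eps)
        return lam
    ```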


    Tomographic reconstruction from Poisson distributed data: a fast and convergent EM-TV dual approach

    No full text
    This paper focuses on tomographic reconstruction for nuclear medicine imaging, where the classical approach consists in maximizing the likelihood of Poisson-distributed data using the iterative expectation maximization (EM) algorithm. In this context, when the quantity of acquired data is low and produces a low signal-to-noise ratio in the images, a step forward is to incorporate a total variation prior on the solution into a MAP-EM formulation. The novelty of this paper is a convergent and efficient numerical scheme for computing the MAP-EM optimizer, based on a splitting approach that alternates an EM step and a dual-TV-minimization step. The main theoretical result is the proof of stability and convergence of this scheme. We also present numerical experiments in which our algorithm appears at least as efficient and accurate as other reference algorithms from the literature.
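
    As a structural illustration of the proposed splitting, the sketch below alternates one EM update with one TV denoising step. Chambolle's dual projection algorithm for the quadratic-fidelity TV problem stands in for the paper's Poisson-adapted dual step, so this is a sketch of the alternation, not the authors' exact scheme.

    ```python
    import numpy as np

    def grad(u):
        """Forward-difference gradient with Neumann boundary."""
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        """Discrete divergence, the negative adjoint of grad."""
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
        dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
        dy[:, -1] = -py[:, -2]
        return dx + dy

    def tv_denoise(f, lam, n_iter=100, tau=0.125):
        """Chambolle's dual projection for min_u 0.5*||u-f||^2 + lam*TV(u);
        tau <= 1/8 guarantees convergence of the dual iteration."""
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(n_iter):
            gx, gy = grad(div(px, py) - f / lam)
            norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
            px = (px + tau * gx) / norm
            py = (py + tau * gy) / norm
        return f - lam * div(px, py)

    def em_tv(A, y, shape, lam_tv, n_outer=30, eps=1e-12):
        """Alternate one MLEM update for y ~ Poisson(A u) with one
        TV denoising step (the splitting structure of the paper)."""
        u = np.ones(shape)
        sens = A.sum(axis=0).reshape(shape)
        for _ in range(n_outer):
            proj = A @ u.ravel()
            ratio = (A.T @ (y / np.maximum(proj, eps))).reshape(shape)
            u *= ratio / np.maximum(sens, eps)
            u = np.maximum(tv_denoise(u, lam_tv), 0.0)  # keep nonnegativity
        return u
    ```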

    Effect of temperature and NH

    No full text
    To improve the efficiency of biological nitrogen removal, this experiment used a sequencing batch biofilm reactor (SBBR) with a Luffa cylindrica sponge carrier to treat domestic sewage, and studied the effect of temperature on total nitrogen (TN) removal in the reactor as well as the changes in the various forms of nitrogen. The results showed that the TN removal rate of the Luffa cylindrica sponge carrier SBBR peaked at 30 °C, reaching 82.25%, indicating that this reactor is well suited for removing nitrogen from domestic sewage.

    3D reconstruction benchmark of a Compton camera against a parallel hole gamma-camera on ideal data

    No full text
    Compton cameras and collimated gamma cameras are competing devices for prompt-gamma detection in range verification for particle therapy. In this study, we evaluate the first approach from the point of view of the tomographic reconstruction step by comparing it to the second. We set aside technological constraints by considering a simple geometry, ideal detection stages, and a mono-energetic synthetic phantom. To this end, both analytic (filtered back-projection) and iterative (list-mode maximum likelihood expectation maximization) algorithms are applied in conjunction with total variation denoising. Previous studies have shown that the Compton camera has a higher efficiency than the mechanically collimated camera, with factors between ten and one hundred reported. However, for each detected event the emission position of the original photon lies on a conical surface for Compton cameras, instead of a line for collimated cameras. This additional degree of freedom calls for a larger data set during image reconstruction and may cancel out the benefit of the superior efficiency. We consider here a static Compton camera and a rotating collimated camera with similar angular coverage. We show empirically that reconstruction from conical projections requires ten times more detected events to obtain at least the same image quality. In some experiments, the Compton camera avoids the severe artefacts produced by the limited-angle geometry of the collimated camera. In addition, the Compton camera should be better suited for imaging high-energy and poly-energetic sources.
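
    The conical surface mentioned above follows from Compton kinematics: the measured energies fix the scattering angle, and hence the half-opening angle of the cone on which the emission point must lie. A small sketch of this standard formula (the function name and example energies are illustrative):

    ```python
    import math

    ME_C2_KEV = 511.0  # electron rest energy in keV

    def cone_half_angle(e0_kev, e1_kev):
        """Half-opening angle (radians) of the Compton cone for a photon
        of initial energy e0 scattered to energy e1, from
        cos(theta) = 1 - me*c^2 * (1/e1 - 1/e0)."""
        cos_t = 1.0 - ME_C2_KEV * (1.0 / e1_kev - 1.0 / e0_kev)
        if not -1.0 <= cos_t <= 1.0:
            raise ValueError("energies inconsistent with Compton scattering")
        return math.acos(cos_t)

    # A 4.4 MeV prompt gamma depositing 400 keV in the scatterer:
    theta = cone_half_angle(4400.0, 4000.0)  # ~0.15 rad (about 8.7 degrees)
    ```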

    Total variation and point spread function priors for MLEM reconstruction in Compton camera imaging

    Get PDF
    The Compton camera is a gamma-ray imaging device already employed in astronomy and still under investigation for the clinical domain. A key point in the imaging process is the tomographic reconstruction step. When the acquisition parameters and the a priori information are correctly accounted for, iterative algorithms are able to produce accurate images by compensating for measurement uncertainties and statistical noise. In this work we focus on the list-mode maximum likelihood expectation maximization (LM-MLEM) algorithm with a smoothness prior expressed by the total variation norm. This type of regularization is particularly well suited for low-dose acquisitions, as is the case in the applications foreseen for the camera. We show that the TV prior strongly improves the images when data are acquired in ideal conditions. For realistic data, this prior alone is not sufficient and deconvolution with a pre-calculated image-space kernel should also be considered.
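
    To make the image-space PSF idea concrete, here is a minimal numpy/scipy sketch in which a symmetric Gaussian blur stands in for the pre-calculated kernel and is folded into the MLEM forward model; the TV step would be interleaved in the same loop. This is an illustrative reading of the approach, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mlem_psf(A, y, shape, sigma_psf, n_iter=50, eps=1e-12):
        """MLEM with an image-space PSF in the forward model y ~ A @ blur(u).
        A symmetric Gaussian stands in for the pre-calculated kernel,
        so the adjoint of the blur is the blur itself."""
        blur = lambda img: gaussian_filter(img, sigma_psf)
        u = np.ones(shape)
        sens = blur(A.sum(axis=0).reshape(shape))  # adjoint applied to ones
        for _ in range(n_iter):
            proj = A @ blur(u).ravel()             # expected counts
            ratio = (A.T @ (y / np.maximum(proj, eps))).reshape(shape)
            u *= blur(ratio) / np.maximum(sens, eps)
        return u
    ```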