400 research outputs found
Image reconstruction in fluorescence molecular tomography with sparsity-initialized maximum-likelihood expectation maximization
We present a reconstruction method for fluorescence molecular tomography (FMT)
that uses maximum-likelihood expectation maximization (MLEM) to model Poisson
noise. MLEM is initialized with the output of a sparse
reconstruction-based approach, which performs truncated singular value
decomposition-based preconditioning followed by the fast iterative
shrinkage-thresholding algorithm (FISTA) to enforce sparsity. The motivation
for this approach is that sparsity information could be accounted for within
the initialization, while MLEM would accurately model Poisson noise in the FMT
system. Simulation experiments show that the proposed method significantly
improves images both qualitatively and quantitatively. The method converges over
20 times faster than uniformly initialized MLEM and improves
robustness to noise compared to pure sparse reconstruction. We also
theoretically justify the ability of the proposed approach to reduce noise in
the background region compared to pure sparse reconstruction. Overall, these
results provide strong evidence for modeling Poisson noise in FMT reconstruction
and for applying the proposed reconstruction framework to FMT imaging.
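As a rough, minimal sketch of the two-stage idea (not the authors' exact pipeline), the snippet below seeds MLEM with an L1-regularized FISTA estimate, assuming a linear system matrix A and Poisson-distributed measurements y; the TSVD-based preconditioning step is omitted for brevity:

import numpy as np

def fista_l1(A, y, lam=0.1, n_iter=200):
    # Sparse reconstruction via FISTA with an L1 penalty (illustrative only).
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        v = z - A.T @ (A @ z - y) / L          # gradient step
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return np.maximum(x, 0.0)                  # nonnegative seed for MLEM

def mlem(A, y, x0, n_iter=50, eps=1e-12):
    # Standard MLEM update for Poisson data: x <- x / (A^T 1) * A^T (y / (A x)).
    x = x0 + eps                               # MLEM needs a strictly positive start
    sens = A.T @ np.ones_like(y)               # sensitivity image
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / (sens + eps)
    return x

# Sparsity-initialized MLEM: the FISTA output seeds the Poisson-model iterations.
# x_rec = mlem(A, y, fista_l1(A, y))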
Incorporating reflection boundary conditions in the Neumann series radiative transport equation: Application to photon propagation and reconstruction in diffuse optical imaging
We propose a formalism to incorporate boundary conditions in a Neumann-series-based radiative transport equation. The formalism accurately models the reflection of photons at the tissue-external medium interface using Fresnel’s equations. The formalism was used to develop a gradient-descent-based image reconstruction technique. The proposed methods were implemented for 3D diffuse optical imaging. In computational studies, the average root-mean-square error (RMSE) of the output images and of the estimated absorption coefficients was reduced by 38% and 84%, respectively, when the reflection boundary conditions were incorporated. These results demonstrate the importance of incorporating boundary conditions that model the reflection of photons at the tissue-external medium interface.
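As a simple illustration of the boundary model (a sketch, not the authors' implementation), the unpolarized Fresnel reflectance for a photon arriving at the tissue-external medium interface can be computed from the incidence angle and refractive indices; the index values used below are assumptions:

import numpy as np

def fresnel_reflectance(cos_theta_i, n_in=1.4, n_out=1.0):
    # Unpolarized Fresnel reflectance for a photon hitting the boundary from
    # inside the tissue (index n_in) toward the external medium (index n_out).
    sin2_theta_t = (n_in / n_out) ** 2 * (1.0 - cos_theta_i ** 2)   # Snell's law
    if sin2_theta_t >= 1.0:
        return 1.0                                                  # total internal reflection
    cos_theta_t = np.sqrt(1.0 - sin2_theta_t)
    r_s = (n_in * cos_theta_i - n_out * cos_theta_t) / (n_in * cos_theta_i + n_out * cos_theta_t)
    r_p = (n_in * cos_theta_t - n_out * cos_theta_i) / (n_in * cos_theta_t + n_out * cos_theta_i)
    return 0.5 * (r_s ** 2 + r_p ** 2)        # average over s and p polarizations

# e.g. reflectance at 30 degrees incidence: fresnel_reflectance(np.cos(np.deg2rad(30.0)))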
Generalized Dice Focal Loss trained 3D Residual UNet for Automated Lesion Segmentation in Whole-Body FDG PET/CT Images
Automated segmentation of cancerous lesions in PET/CT images is a vital
initial task for quantitative analysis. However, it is often challenging to
train deep learning-based segmentation methods to a high degree of accuracy due
to the diversity of lesions in terms of their shapes, sizes, and radiotracer
uptake levels. These lesions can be found in various parts of the body, often
close to healthy organs that also show significant uptake. Consequently,
developing a comprehensive PET/CT lesion segmentation model is a demanding
endeavor for routine quantitative image analysis. In this work, we train a 3D
Residual UNet using Generalized Dice Focal Loss function on the AutoPET
challenge 2023 training dataset. We develop our models in a 5-fold
cross-validation setting and ensemble the five models via average and
weighted-average ensembling. In the preliminary test phase, the average
ensemble achieved a Dice similarity coefficient (DSC), false-positive volume
(FPV), and false-negative volume (FNV) of 0.5417, 0.8261 ml, and 0.2538 ml,
respectively, while the weighted-average ensemble achieved 0.5417, 0.8186 ml,
and 0.2538 ml, respectively. Our algorithm can be accessed via this link:
https://github.com/ahxmeds/autosegnet (AutoPET-II challenge, 2023).
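As a hedged sketch of what a Generalized Dice Focal Loss can look like for a binary lesion/background task (one common formulation with illustrative hyperparameters, not necessarily the exact loss used in the challenge pipeline):

import torch
import torch.nn.functional as F

def generalized_dice_focal_loss(logits, target, gamma=2.0, lambda_focal=1.0, eps=1e-6):
    # Illustrative binary Generalized Dice + Focal loss.
    # logits, target: (B, 1, D, H, W) tensors; target holds 0/1 lesion labels.
    prob = torch.sigmoid(logits)
    target = target.float()
    dims = tuple(range(2, target.ndim))                  # spatial dimensions

    # Generalized Dice term: class weights ~ 1 / (reference volume)^2,
    # so small lesion volumes are not drowned out by the background.
    w_fg = 1.0 / (target.sum(dim=dims) ** 2 + eps)
    w_bg = 1.0 / ((1 - target).sum(dim=dims) ** 2 + eps)
    inter = w_fg * (prob * target).sum(dim=dims) + w_bg * ((1 - prob) * (1 - target)).sum(dim=dims)
    union = w_fg * (prob + target).sum(dim=dims) + w_bg * (2 - prob - target).sum(dim=dims)
    dice_loss = (1.0 - 2.0 * inter / (union + eps)).mean()

    # Focal term: down-weights easy voxels so hard, high-uptake regions dominate.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    focal_loss = ((1 - p_t) ** gamma * bce).mean()

    return dice_loss + lambda_focal * focal_loss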
On Ontological Openness: Who Is Open? Resonating Thoughts From Continental Philosophers And Muslim Mystics
Being “open-minded” is considered a definite virtue in today’s world. What does it mean to be open-minded? What we refer to as ‘openness’ in this writing moves beyond the ability to see and entertain other views. It cuts deep into both the intentionality and content of what one contemplates. This work focuses on ontological openness, reflecting parallel and resonating thoughts by prominent continental philosophers Martin Heidegger and Hans-Georg Gadamer. Though Gadamer appears after Heidegger, we find it fruitful to read Gadamer as leading to Heidegger. We compare their thoughts with those of Muslim mystics, focusing on the highly influential and ground-breaking thinker Ibn Arabi and thinkers in his tradition.
Entanglement of Being and Beings: Heidegger and Ibn Arabi on Sameness and Difference
Martin Heidegger was reported to have considered his work Identity and Difference (based on two seminars delivered in 1957) to be “the most important thing he … published since [his magnum opus] Being and Time.” (Heidegger, 1969, 7) While Being and Time begins with the human being (Da-sein; being-there), aiming to proceed to an understanding of the Being of beings, in Identity and Difference the focus is on the very “relation” between the human being and Being. (Ibid., 8) The present work highlights the intertwined and entangled sameness/difference between beings and Being. This entanglement and belonging, as we shall see, is also one of the most foundational concepts and prominent themes in the work of the renowned and highly influential Muslim mystic Ibn Arabi (1165-1240). We particularly focus on his important compendium of mystical teachings, Fusus al-Hikam (Bezels of Wisdom). We also touch upon the sameness/difference of thoughts between these two thinkers.
Implementation of absolute quantification in small-animal SPECT imaging: Phantom and animal studies
Purpose: The presence of photon attenuation severely challenges quantitative accuracy
in single-photon emission computed tomography (SPECT) imaging. Consequently,
various attenuation correction methods have been developed to compensate for
this degradation. The present study aims to implement an attenuation correction
method and then evaluate the quantitative accuracy of attenuation correction in
small-animal SPECT imaging.
Methods: Images were reconstructed using an iterative reconstruction method
based on the maximum-likelihood expectation maximization (MLEM) algorithm
including resolution recovery. This was implemented for our dedicated
small-animal SPECT (HiReSPECT) system. For accurate quantification, the voxel values
were converted to activity concentration via a calculated calibration factor. An
attenuation correction algorithm was developed based on Chang's first-order
method. Both a phantom study and experimental measurements with four rats were
used to validate the proposed method.
Results: The phantom experiments showed that the error of −15.5% in the estimation
of activity concentration in a uniform region was reduced to +5.1% when
attenuation correction was applied. For in vivo studies, the average quantitative
error of −22.8 ± 6.3% (ranging from −31.2% to −14.8%) in the uncorrected images
was reduced to +3.5 ± 6.7% (ranging from −6.7% to +9.8%) after applying attenuation
correction.
Conclusion: The results indicate that the proposed attenuation correction algorithm
based on Chang's first-order method, as implemented in our dedicated small-animal
SPECT system, significantly improves the accuracy of quantitative analysis and
absolute quantification.
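A rough sketch of a first-order Chang correction for a single reconstructed slice (an illustrative NumPy/SciPy version, not the HiReSPECT implementation; the attenuation map, voxel size, and angle set are assumed inputs):

import numpy as np
from scipy.ndimage import rotate

def chang_first_order_correction(recon, mu_map, voxel_size_cm, angles_deg):
    # recon:       reconstructed activity slice (ny, nx)
    # mu_map:      linear attenuation coefficients in 1/cm, same shape as recon
    # angles_deg:  projection angles over which the attenuation factor is averaged
    atten = np.zeros_like(mu_map, dtype=float)
    for ang in angles_deg:
        # Rotate the mu-map so the current projection direction lies along the rows,
        # then integrate mu from each voxel out to the boundary (cumulative sum).
        mu_rot = rotate(mu_map, ang, reshape=False, order=1)
        path = np.cumsum(mu_rot[:, ::-1], axis=1)[:, ::-1] * voxel_size_cm
        atten += rotate(np.exp(-path), -ang, reshape=False, order=1)
    atten /= len(angles_deg)                  # average attenuation factor per voxel
    return recon / np.maximum(atten, 1e-6)    # Chang correction: divide by the factor

# Example with hypothetical inputs:
# corrected = chang_first_order_correction(recon, mu_map, 0.1, range(0, 360, 6))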
A Pre-Logical World to Begin With: Singing New Paradigms
The present work focuses on the pre-logical realm from which different logics and paradigms are derived. It is a realm before us, fully sensed by us, yet unthought by us. We know it, are immersed in it, but do not think it. It defines us, prompts us, motivates us, reveals itself to us, yet remains concealed. Here we focus on commentaries by the continental philosophers Nietzsche and Heidegger, as well as the Andalusian Muslim mystic Ibn Arabi, in investigating the pre-logical realm. Relationships between knowledge and power, in this context, are explored. In light of resonating thoughts by the above-mentioned thinkers, we elaborate on and revisit the laws of thought as well as multiple well-known ‘self-evident’ axioms such as the principles of contradiction, identity, non-circular reasoning, and causality (to name a few), which can and should always be revisited, enabling new openings and the singing of new paradigms.
IgCONDA-PET: Implicitly-Guided Counterfactual Diffusion for Detecting Anomalies in PET Images
Minimizing the need for pixel-level annotated data for training PET anomaly
segmentation networks is crucial, particularly due to time and cost constraints
related to expert annotations. Current un-/weakly-supervised anomaly detection
methods rely on autoencoders or generative adversarial networks trained only on
healthy data, although such models are often challenging to train. In this work, we
present a weakly supervised and Implicitly guided COuNterfactual diffusion
model for Detecting Anomalies in PET images, branded as IgCONDA-PET. The
training is conditioned on image class labels (healthy vs. unhealthy) along
with implicit guidance to generate counterfactuals for an unhealthy image with
anomalies. The counterfactual generation process synthesizes the healthy
counterpart for a given unhealthy image, and the difference between the two
facilitates the identification of anomaly locations. The code is available at:
https://github.com/igcondapet/IgCONDA-PET.git
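A hedged sketch of the core mechanism (implicit, i.e. classifier-free, guidance toward a "healthy" counterfactual, followed by a voxel-wise difference), assuming a trained conditional noise predictor eps_model(x_t, t, y) and a diffusers-style scheduler; all names and the guidance scale are illustrative rather than the paper's exact implementation:

import torch

@torch.no_grad()
def counterfactual_anomaly_map(x_unhealthy, eps_model, scheduler,
                               healthy_label=0, null_label=-1,
                               guidance_scale=3.0, noise_level=400):
    # 1) Partially noise the unhealthy input so coarse anatomy is preserved.
    t_start = torch.tensor([noise_level], device=x_unhealthy.device)
    noise = torch.randn_like(x_unhealthy)
    x_t = scheduler.add_noise(x_unhealthy, noise, t_start)

    # 2) Reverse diffusion guided toward the 'healthy' class via implicit
    #    (classifier-free) guidance on the predicted noise.
    for t in scheduler.timesteps[scheduler.timesteps <= noise_level]:
        eps_uncond = eps_model(x_t, t, null_label)        # unconditional prediction
        eps_healthy = eps_model(x_t, t, healthy_label)    # healthy-conditioned prediction
        eps = eps_uncond + guidance_scale * (eps_healthy - eps_uncond)
        x_t = scheduler.step(eps, t, x_t).prev_sample

    # 3) The voxel-wise difference between the input and its healthy
    #    counterfactual highlights candidate anomaly locations.
    return (x_unhealthy - x_t).abs()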
