Multigrid Backprojection Super-Resolution and Deep Filter Visualization
We introduce a novel deep-learning architecture for image upscaling by large
factors (e.g. 4x, 8x) based on examples of pristine high-resolution images. Our
target is to reconstruct high-resolution images from their downscaled versions.
The proposed system performs a multi-level progressive upscaling, starting from
small factors (2x) and updating for higher factors (4x and 8x). The system is
recursive as it repeats the same procedure at each level. It is also residual
since we use the network to update the outputs of a classic upscaler. The
network residuals are improved by Iterative Back-Projections (IBP) computed in
the features of a convolutional network. To work in multiple levels we extend
the standard back-projection algorithm using a recursion analogous to
Multi-Grid algorithms commonly used as solvers of large systems of linear
equations. We finally show how the network can be interpreted as a standard
upsampling-and-filter upscaler with a space-variant filter that adapts to the
geometry. This approach allows us to visualize how the network learns to
upscale. Finally, our system reaches state-of-the-art quality for models with
relatively few parameters.
Comment: Spotlight paper at the Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI-19).
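The refinement loop described above, updating a classic upscaler's output so that its downscaled version matches the input, can be illustrated with a minimal numpy sketch. The box-average downscaler and nearest-neighbor upscaler below are simple stand-ins for the paper's learned, feature-space operators, so this is the classic IBP scheme rather than the proposed network:

```python
import numpy as np

def downscale(x, s):
    """Box-average downscale by integer factor s (stand-in for the real degradation)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upscale(x, s):
    """Nearest-neighbor upscale by integer factor s (stand-in for the classic upscaler)."""
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def iterative_back_projection(y, s, n_iters=20, step=1.0):
    """Refine an upscaled estimate so that its downscaled version matches y."""
    x = upscale(y, s)                        # initial estimate from a classic upscaler
    for _ in range(n_iters):
        residual = y - downscale(x, s)       # error in low-resolution space
        x = x + step * upscale(residual, s)  # back-project the residual
    return x
```

Because box-averaging a nearest-neighbor-upscaled image recovers it exactly, this toy loop converges immediately; with realistic blur kernels, more iterations and a smaller step are needed.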
Explanation and Visualization of Convolutional Neural Networks Based on Inverse Operations
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Information Engineering, 2021.8.
Interpretability and explainability of machine learning systems have received ever-increasing attention, especially for convolutional neural networks (CNNs). Although there are various interpretation techniques for learning algorithms, post-hoc local explanation methods (e.g., attribution methods that visualize the pixel-level contribution of an input to its corresponding result) are of great interest because they can deal with the high-dimensional parameters and nonlinear operations of CNNs. Therefore, this dissertation presents three new post-hoc local explanation methods to visualize and understand the working mechanisms of CNNs.
At first, this dissertation presents a new method called guided nonlinearity (GNL) that improves the performance of attribution by backpropagating only positive gradients through nonlinear operations.
GNL is inspired by the mechanism of action potential (AP) generation in the postsynaptic neuron, which depends on the sum of excitatory (EPSP) and inhibitory (IPSP) postsynaptic potentials. This dissertation assumes that paths consisting of excitatory synapses faithfully reflect the contributions of inputs to the output. This assumption is then applied to CNNs by allowing only positive gradients to backpropagate through nonlinear operations. Experimental results have shown that GNL outperforms existing methods for computing attributions in terms of deletion metrics and yields fine-grained and human-interpretable attributions.
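The key operation, letting only positive gradients pass backward through nonlinearities, resembles guided backpropagation. A rough numpy illustration for a single ReLU (illustrative only, not the dissertation's exact implementation):

```python
import numpy as np

def relu_forward(x):
    """Standard ReLU forward pass; the activation mask is kept for the backward pass."""
    mask = x > 0
    return x * mask, mask

def guided_backward(grad_out, mask):
    """GNL-style backward pass: gradients are gated by the ReLU mask as usual,
    and in addition only positive (excitatory) gradients are propagated."""
    grad_in = grad_out * mask                    # standard ReLU gradient gating
    return np.where(grad_in > 0, grad_in, 0.0)   # drop inhibitory contributions
```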
However, the attributions from existing methods, including GNL, lack a common theoretical background and sometimes give contradicting results. To address this problem, this dissertation develops the operation-wise inverse method, which computes the inverse of the prediction in an operation-wise manner by considering that CNNs can be decomposed into four fundamental operations (convolution, max-pooling, ReLU, and fully-connected). The operation-wise inverse process assumes that the forward pass of a CNN is a sequential propagation of physical quantities that indicate the magnitude of specific image features. The inverses of the fundamental operations are formulated as constrained optimization problems requiring that the inverted results generate output features consistent with the forward pass. The inverse of the prediction is then computed by sequentially applying the inverses of the CNN's fundamental operations. Experimental results show that the proposed operation-wise approach can serve as a reference tool for computing attributions because it provides visualization results equivalent to several conventional methods, and its attributions achieve state-of-the-art performance in terms of deletion score.
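Of the four fundamental operations, max-pooling admits a particularly simple consistency-constrained inverse: route each pooled value back to the argmax location recorded during the forward pass, so that re-pooling the inverse reproduces the output. A minimal numpy sketch (illustrative only; the dissertation formulates the inverses as constrained optimization problems):

```python
import numpy as np

def maxpool_forward(x, k=2):
    """k x k max pooling that also records argmax locations for the inverse."""
    h, w = x.shape
    blocks = x.reshape(h // k, k, w // k, k).transpose(0, 2, 1, 3).reshape(-1, k * k)
    idx = blocks.argmax(axis=1)
    return blocks.max(axis=1).reshape(h // k, w // k), idx

def maxpool_inverse(y, idx, k=2):
    """Consistency-constrained inverse: place each pooled value back at the
    location that produced it, zeros elsewhere, so re-pooling reproduces y."""
    hh, ww = y.shape
    blocks = np.zeros((hh * ww, k * k))
    blocks[np.arange(hh * ww), idx] = y.ravel()
    return blocks.reshape(hh, ww, k, k).transpose(0, 2, 1, 3).reshape(hh * k, ww * k)
```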
Although the operation-wise method can provide a reference framework for computing attributions, applying the attribution concept to CNNs with multiple-valued predictions had not yet been addressed, because the computation of attribution requires a single scalar value that represents the prediction.
To address this problem, this dissertation proposes the layer-wise inverse-based approach, which decomposes CNNs into a set of layers whose inputs and outputs are positive values that can be interpreted as neural activations.
The inverses of the layers are formulated as constrained optimization problems that identify activations-of-interest in lower layers. The inverse of the prediction is then computed by sequentially applying the inverses of the CNN's layers, as in the operation-wise method. Experimental results show that the proposed layer-wise inverse-based method can analyze CNNs for classification and regression within the same framework.
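As a toy version of such a layer inverse, assume a linear layer with nonnegative, activation-like inputs and an L1 term encouraging minimal activation; the inverse can then be sketched as projected gradient descent (the solver choice and function name are illustrative, not the dissertation's):

```python
import numpy as np

def layer_inverse(W, y, n_iters=500, lam=0.01):
    """Invert a linear layer under a nonnegativity (activation) constraint by
    projected gradient descent on ||W x - y||^2 + lam * sum(x): a minimal,
    consistent set of lower-layer activations reproducing the target y."""
    x = np.zeros(W.shape[1])
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iters):
        grad = W.T @ (W @ x - y) + lam
        x = np.maximum(x - step * grad, 0.0)          # project onto x >= 0
    return x
```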
Especially for the case of regression, the layer-wise approach showed that a conventional CNN for single image super-resolution (VDSR) overlooks a portion of the input's frequency bands, which may result in performance degradation at those frequencies.
Contents
List of Tables
List of Figures
1 Introduction
1.1 Guided Nonlinearity
1.2 Inverse-based approach
1.2.1 Operation-wise method
1.2.2 Layer-wise method
1.3 Outline
2 Related Work
2.1 Activation-based approach
2.2 Perturbation-based approach
2.3 Backpropagation-based approach
2.4 Inverse-based approach
3 Guided Nonlinearity
3.1 Motivation and Overview
3.2 Proposed Guided Non-linearity
3.2.1 Integrated Gradients
3.2.2 Postulations
3.2.3 Proposed method
3.3 Experimental Results
3.3.1 Evaluation Metrics
3.3.2 Experiment details
3.3.3 Results and Discussions
3.4 Summary
4 Operation-wise Approach
4.1 Motivation and Overview
4.2 Proposed Method
4.2.1 Problem statement
4.2.2 Proposed constraints
4.2.3 Mathematical formulation
4.3 Implementation details
4.3.1 Inverse of ReLU and Max Pooling
4.3.2 Inverse of Fully Connected and Convolution Layers
4.4 Experimental Settings
4.4.1 Qualitative results
4.4.2 Quantitative Results
4.5 Summary
5 Layer-wise Approach
5.1 Motivation and Overview
5.2 Formulation of the Proposed Inverse Approach
5.2.1 Activation range
5.2.2 Minimal activation
5.2.3 Linear approximation
5.2.4 Layer-wise inverse
5.3 Details of inverse computation
5.3.1 Convolution block (linear part) + ReLU
5.3.2 Max-pooling layer
5.3.3 Fully connected block (linear part) + ReLU
5.3.4 Fully connected block (linear part) + Softmax
5.4 Application to the ImageNet classification task
5.4.1 Evaluation of output-reconstruction in terms of input-simplicity
5.4.2 Deletion and insertion scores
5.4.3 Selection of the regularization term weight
5.4.4 Comparison to Existing Methods
5.4.5 Output-reconstruction versus input-simplicity plot
5.4.6 Ablation study of the activation regularization
5.5 The inverse of single image super-resolution network
5.5.1 Experimental setting
5.5.2 Selection of the regularization term weight
5.5.3 Evaluation of the proposed inverse process
5.5.4 Frequency domain analysis of attribution
5.6 Summary
6 Conclusions
Bibliography
Abstract (In Korean)
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Interpretability methods aim to help users build trust in and understand the
capabilities of machine learning models. However, existing approaches often
rely on abstract, complex visualizations that poorly map to the task at hand or
require non-trivial ML expertise to interpret. Here, we present two visual
analytics modules that facilitate an intuitive assessment of model reliability.
To help users better characterize and reason about a model's uncertainty, we
visualize raw and aggregate information about a given input's nearest
neighbors. Using an interactive editor, users can manipulate this input in
semantically-meaningful ways, determine the effect on the output, and compare
against their prior expectations. We evaluate our interface using an
electrocardiogram beat classification case study. Compared to a baseline
feature importance interface, we find that 14 physicians are better able to
align the model's uncertainty with domain-relevant factors and build intuition
about its capabilities and limitations.
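The aggregate nearest-neighbor information such a module visualizes can be sketched with a hypothetical helper (the function and its interface are illustrative, not the authors' code):

```python
import numpy as np

def neighbor_summary(query, examples, labels, k=5):
    """Summarize a model input's k nearest training neighbors: their indices
    plus the label distribution, i.e. the kind of raw and aggregate
    information a user would inspect to reason about model uncertainty."""
    d = np.linalg.norm(examples - query, axis=1)   # Euclidean distance to each example
    idx = np.argsort(d)[:k]                        # indices of the k nearest neighbors
    values, counts = np.unique(labels[idx], return_counts=True)
    return idx, dict(zip(values.tolist(), (counts / k).tolist()))
```

A dominant single label among near neighbors suggests the model has consistent support for its prediction; a mixed distribution flags inputs where its uncertainty deserves scrutiny.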
3D Shape Understanding and Generation
In recent years, Machine Learning techniques have revolutionized solutions to longstanding image-based problems, like image classification, generation, semantic segmentation, object detection and many others. However, if we want to be able to build agents that can successfully interact with the real world, those techniques need to be capable of reasoning about the world as it truly is: a tridimensional space. There are two main challenges in handling 3D information in machine learning models. First, it is not clear what the best 3D representation is. For images, convolutional neural networks (CNNs) operating on raster images yield the best results in virtually all image-based benchmarks. For 3D data, the best combination of model and representation is still an open question. Second, 3D data is not available on the same scale as images: taking pictures is a common procedure in our daily lives, whereas capturing 3D content is an activity usually restricted to specialized professionals. This thesis is focused on addressing both of these issues. Which model and representation should we use for generating and recognizing 3D data? What are efficient ways of learning 3D representations from a few examples? Is it possible to leverage image data to build models capable of reasoning about the world in 3D?
Our research findings show that it is possible to build models that efficiently generate 3D shapes as irregularly structured representations. Those models require significantly less memory while generating higher quality shapes than the ones based on voxels and multi-view representations. We start by developing techniques to generate shapes represented as point clouds. This class of models leads to high quality reconstructions and better unsupervised feature learning. However, since point clouds are not amenable to editing and human manipulation, we also present models capable of generating shapes as sets of shape handles -- simpler primitives that summarize complex 3D shapes and were specifically designed for high-level tasks and user interaction. Despite their effectiveness, those approaches require some form of 3D supervision, which is scarce. We present multiple alternatives to this problem. First, we investigate how approximate convex decomposition techniques can be used as self-supervision to improve recognition models when only a limited number of labels are available. Second, we study how neural network architectures induce shape priors that can be used in multiple reconstruction tasks -- using both volumetric and manifold representations. In this regime, reconstruction is performed from a single example -- either a sparse point cloud or multiple silhouettes. Finally, we demonstrate how to train generative models of 3D shapes without using any 3D supervision by combining differentiable rendering techniques and Generative Adversarial Networks.
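A standard reconstruction objective for point-cloud generative models of this kind is the Chamfer distance; the abstract does not name its losses, but a minimal numpy sketch shows the idea:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3):
    mean squared distance from each point to its nearest neighbor in the other
    cloud. A common reconstruction loss for point-cloud generators."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The metric is permutation-invariant, which is what makes it suitable for irregularly structured outputs like point sets.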
Optimization of the Parameters of the YAP-(S)PETII Scanner for SPECT Acquisition
Abstract
Single Photon Emission Computed Tomography (SPECT) can be considered a milestone among biomedical imaging techniques: it visualizes functional processes in vivo, based on the emission of gamma rays produced within the body. The most distinctive feature of SPECT among imaging modalities is that it is based on the tracer principle, discovered by George Charles de Hevesy in the first decade of the twentieth century.
According to this principle, an atom within a molecule taking part in an organism's metabolism can be replaced by one of its radioactive isotopes. In this way, we are able to follow and detect the paths of the photons emitted by the radioactive element inside the body.
SPECT produces images by using a gamma camera, which consists of two major functional components: the collimator and the radiation detector. The collimator is a thick sheet of a heavy metal such as lead, tungsten, or gold, perforated with densely packed small holes, placed just in front of the photon detector. The radiation detector converts the gamma rays into scintillation light photons.
In conventional SPECT, scanners utilize a parallel-hole collimator. Each collimator hole defines a small solid angle, and only photons traveling within that solid angle can pass through the hole and reach the detector. In this way, we can create projection images of the radioisotope distribution. The number of photons reaching the radiation detector through the collimator holes determines the image quality in terms of signal-to-noise ratio.
One of the crucial parts of any SPECT scanner is the collimator design. The main part of this dissertation investigates the performance characteristics of the YAP-(S)PETII scanner collimator and obtains collimator characteristic curves for optimization purposes.
Before starting the collimator performance investigation of the YAP-(S)PETII scanner, we first simulated it in SPECT mode with a Tc-99m point source to measure collimator and system efficiency using GATE, the Geant4 Application for Emission Tomography. GATE is an advanced, flexible, precise, open-source Monte Carlo toolkit developed by the international OpenGATE collaboration and dedicated to numerical simulations in medical imaging. We obtained collimator and system efficiency as functions of collimator length, hole radius, and septal thickness using GATE_v4. We then compared our results with analytical formulations of efficiency and resolution. For those simulation experiments, we found that the difference between the simulated and the analytical results, with respect to H. Anger's approximate geometric collimator efficiency formula, is within 20%. We then wrote a new ASCII sorter algorithm that reads the ASCII output of GATE_v4, creates a sinogram, and reconstructs it to obtain the final simulation results.
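Anger's approximate geometric efficiency formula, the analytical reference mentioned above, can be sketched directly. The constant K ≈ 0.26 for hexagonal holes in a hexagonal array and the effective-length penetration correction are standard textbook values, assumed here rather than taken from the dissertation:

```python
def anger_efficiency(d, t, l, mu, K=0.26):
    """Approximate geometric efficiency of a parallel-hole collimator (Anger):
    g ~ (K * d^2 / (l_eff * (d + t)))^2, with effective length l_eff = l - 2/mu
    correcting for septal penetration. d = hole diameter, t = septal thickness,
    l = collimator length (all cm); mu = linear attenuation coefficient of the
    septal material at the photon energy (1/cm)."""
    l_eff = l - 2.0 / mu
    return (K * d * d / (l_eff * (d + t))) ** 2
```

For a typical LEGP geometry (d = 1.5 mm, t = 0.2 mm, l = 25 mm, lead at 140 keV), this gives an efficiency on the order of 1e-4, i.e. roughly one photon in several thousand reaches the detector.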
At the beginning, we used the analytical reconstruction method, filtered back projection (FBP), but this method produced severely blurred images. To solve this problem and increase image quality, we tried different mathematical filters, such as the ramp, Shepp-Logan, and low-pass cosine filters. From the studies mentioned above, we learned that GATE_v4 is not practical for measuring collimator efficiency and resolution. Moreover, the results of GATE_v4 did not directly show the ratio of septally penetrated photons. In light of these findings, we decided to develop a new user-friendly ray-tracing program for the optimization of low energy general purpose (LEGP) parallel-hole collimators.
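The filtering step of FBP with the ramp filter mentioned above takes only a few lines; the Shepp-Logan and cosine filters differ in multiplying |f| by an additional low-pass roll-off to suppress high-frequency noise. A one-dimensional numpy sketch:

```python
import numpy as np

def ramp_filter(projection):
    """Apply the ramp filter (|f| in the frequency domain) to one 1-D
    projection: the filtering step of filtered back projection. Windowed
    variants (Shepp-Logan, cosine) multiply |f| by a low-pass roll-off."""
    freqs = np.fft.fftfreq(projection.size)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))
```

Since the ramp filter is zero at DC, a constant projection is filtered to zero, which is what removes the characteristic 1/r blur of unfiltered back projection.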
In addition, we evaluated the image quality and quantified the impact of high-energy contamination in I-123 isotope imaging. Due to its promising chemical characteristics, Iodine-123 is increasingly used in SPECT studies. Its 159 keV photons are used for imaging; however, high-energy photons introduce errors in the projection data, primarily by penetrating the collimator and scattering inside the crystal at energies close to those of the imaging photons. One way to minimize this effect is the double energy window (DEW) method, because it decreases noise in the main (sensitive) energy window. Using this method, we determined the difference between simulated and experimental projection results and the scattered photon ratio (Sk) of the YAP-(S)PETII scanner for I-123 measurements.
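The DEW correction itself is a pixel-wise subtraction of a scaled scatter-window image from the main-window image; a brief sketch, with the multiplier k an assumed, empirically calibrated value (0.5 is a common default in the literature):

```python
import numpy as np

def dew_correction(main_window, scatter_window, k=0.5):
    """Double-energy-window scatter correction: estimate primary counts by
    subtracting a scaled scatter-window image from the main-window image.
    k must be calibrated for the scanner and isotope; negative results are
    clipped to zero since counts cannot be negative."""
    return np.maximum(main_window - k * scatter_window, 0.0)
```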
The main drawback of GATE simulations is that they are CPU-intensive. To handle this problem, this dissertation presents a feasibility study of a fully Monte Carlo based derivation of the system matrix of the YAP-(S)PETII scanner using the XtreemOS platform. To manage the lifecycle of the simulation on top of XtreemOS, we developed a set of scripts. The main purpose of our study is to integrate a distributed platform like XtreemOS to reduce the overall simulation completion time, increase the feasibility of SPECT simulations in a research environment, and establish an accurate and fast method for deriving the system matrix of the YAP-(S)PETII scanner using a Monte Carlo simulation approach.
We also developed the ML-EM algorithm to reconstruct our GATE simulation results and to derive the system matrix directly from the GATE output. In addition to accuracy considerations, we intend to develop a flexible matrix derivation method and a GATE output reconstruction tool.
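The ML-EM update used for reconstruction can be sketched for a small dense system matrix (the scanner's actual system matrix is large, sparse, and derived from the Monte Carlo simulation; this toy version only shows the multiplicative update):

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """ML-EM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)), with A the
    system matrix (detector bins x voxels, assumed to have positive column
    sums) and y the measured projection counts. The multiplicative update
    preserves nonnegativity of the image estimate automatically."""
    x = np.ones(A.shape[1])                   # uniform initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x / sens * (A.T @ ratio)          # back-project the ratio and rescale
    return x
```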