
    Topics in image reconstruction for high resolution positron emission tomography

    Ill-posed problems are a topic of interdisciplinary interest arising in remote sensing and non-invasive imaging. However, issues remain that are crucial for the successful application of the theory to a given imaging modality. Positron emission tomography (PET) is a non-invasive imaging technique that allows biochemical processes taking place in an organism to be assessed in vivo. PET is a valuable tool for investigating normal human and animal physiology, diagnosing and staging cancer, and studying heart and brain disorders. PET is similar to other tomographic imaging techniques in many ways, but to reach its full potential and extract maximum information from projection data, it must use accurate yet practical image reconstruction algorithms. Several topics related to PET image reconstruction are explored in this dissertation, and the following contributions have been made: (1) A system matrix model was developed using an analytic detector response function based on the linear attenuation of gamma-rays in a detector array; it was also demonstrated that using an oversimplified system model to compute the system matrix leads to image artefacts (IEEE Trans. Nucl. Sci., 2000). (2) An analytic model of the dependence on total counts was used to simplify the cross-validation (CV) stopping rule and to accelerate statistical iterative reconstruction; it can replace the original CV procedure for high-count projection data, where CV yields reasonably accurate images (IEEE Trans. Nucl. Sci., 2001). (3) A regularisation methodology employing singular value decomposition (SVD) of the system matrix was proposed, based on spatial resolution analysis; a characteristic property of the singular value spectrum revealed the relationship between the optimal truncation level for truncated-SVD reconstruction and the optimal reconstructed image resolution (IEEE Trans. Nucl. Sci., 2001). (4) A novel event-by-event linear image reconstruction technique based on a regularised pseudo-inverse of the system matrix was proposed; the algorithm provides a fast way to update an image, potentially in real time, and in principle allows instant visualisation of the radioactivity distribution while the object is still being scanned. The computed image estimate is the minimum-norm least-squares solution of the regularised inverse problem.
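    As a rough illustration of the truncated-SVD idea described in this abstract (a minimal sketch with assumed toy dimensions, not the dissertation's actual implementation), the following Python snippet reconstructs an image from projection data through a regularised pseudo-inverse in which the truncation level k acts as the regularisation parameter.

```python
import numpy as np

def tsvd_reconstruct(A, y, k):
    """Truncated-SVD reconstruction: minimum-norm least-squares solution
    of A x = y using only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]          # keep k leading components
    return Vk.T @ ((Uk.T @ y) / sk)                  # regularised pseudo-inverse applied to y

# Toy example: a random matrix stands in for the detector system model.
rng = np.random.default_rng(0)
A = rng.random((64, 16 * 16))                        # 64 lines of response, 16x16 image (hypothetical)
x_true = np.zeros(16 * 16)
x_true[100:110] = 1.0                                # small "hot" region
y = A @ x_true + 0.01 * rng.standard_normal(64)      # noisy projection data
x_hat = tsvd_reconstruct(A, y, k=40)                 # truncation level chosen ad hoc
```

    Choosing k trades noise amplification against spatial resolution, which is the relationship the third contribution above quantifies.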

    Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework

    The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of Helal (2023) and aims to present a comprehensive overview of tensorization. This transformative approach bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. The paper explores the steps involved in tensorization, multidimensional data sources, the various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented, comparing 2-dimensional algorithms with a multiway algorithm in Python. The results indicate that multiway analysis is more expressive. Contrary to the intuition suggested by the curse of dimensionality, using multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among the various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of multiway analysis methods and their integration with various deep neural network models is presented using case studies in different application domains. (Comment: 34 pages, 8 figures, 4 tables)
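    As a minimal, self-contained illustration of tensorization (not the paper's own BSS experiment; all shapes here are hypothetical), the sketch below keeps a small multi-sensor recording in its native three-way form and shows the mode-n unfolding that multiway methods such as CP or Tucker decompositions operate on.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding (matricisation): rows index the chosen mode,
    columns index all remaining modes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Hypothetical recording: 8 sensors x 500 time samples x 20 trials, kept as a 3-way tensor.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 500, 20))

X_sensor_view = unfold(X, 0)   # 8 x 10000: what a flattened 2-D method would see
X_trial_view = unfold(X, 2)    # 20 x 4000: trial structure that naive flattening discards

# A rank-R CP model would summarise the 8 * 500 * 20 = 80,000 entries with only
# R * (8 + 500 + 20) parameters, illustrating the parameter reduction discussed above.
```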

    Assessment and optimisation of 3D optical topography for brain imaging

    Optical topography has recently evolved into a widespread research tool for non-invasively mapping blood flow and oxygenation changes in the adult and infant cortex. The work described in this thesis focuses on assessing the potential and limitations of this imaging technique and on developing means of obtaining images that are less artefactual and more quantitatively accurate. Due to the diffusive nature of biological tissue, the image reconstruction is an ill-posed problem, and it is typically under-determined owing to the limited number of optodes (sources and detectors). The problem must be regularised in order to provide meaningful solutions, which requires a regularisation parameter (λ) that has a large influence on image quality. This work focuses on three-dimensional (3D) linear reconstruction using zero-order Tikhonov regularisation and on the analysis of different methods for selecting the regularisation parameter. The methods are summarised and applied to simulated data (a deblurring problem) and to experimental data obtained with the University College London (UCL) optical topography system. This thesis also explores means of optimising the reconstruction algorithm to increase imaging performance by using spatially variant regularisation. The sensitivity and quantitative accuracy of the method are investigated using measurements on tissue-equivalent phantoms. Our optical topography system is based on continuous-wave (CW) measurements, and conventional image reconstruction methods cannot provide unique solutions, i.e., they cannot separate tissue absorption and scattering simultaneously. Improved separation between absorption and scattering, and between the contributions of different chromophores, can be obtained by using multispectral image reconstruction. A method is proposed for selecting the optimal wavelengths for optical topography based on the multispectral approach, which involves determining which wavelengths have overlapping sensitivities. Finally, we assess and validate the new three-dimensional imaging tools using in vivo measurements of evoked responses in the infant brain.
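    A minimal sketch of the zero-order Tikhonov step described above, assuming a precomputed sensitivity (Jacobian) matrix J mapping absorption changes to boundary measurements; the matrix, the sizes and the candidate values of λ below are placeholders rather than the UCL system's actual calibration.

```python
import numpy as np

def tikhonov_reconstruct(J, y, lam):
    """Zero-order Tikhonov solution of the linearised problem y = J x:
    x = (J^T J + lam * I)^{-1} J^T y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)

# Hypothetical sizes: 32 source-detector pairs, 10 x 10 x 5 voxel volume.
rng = np.random.default_rng(2)
J = rng.random((32, 10 * 10 * 5))
y = rng.standard_normal(32)                 # simulated boundary intensity changes
images = {lam: tikhonov_reconstruct(J, y, lam) for lam in (1e-3, 1e-2, 1e-1)}
```

    A spatially variant scheme of the kind investigated in the thesis would typically replace the single scalar λ with a spatially varying (e.g. diagonal) weighting over voxels; the scalar form above is only the baseline.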

    On-line Electrical Impedance Tomography for Industrial Batch Processing


    A Functional Approach to Deconvolve Dynamic Neuroimaging Data.

    Positron emission tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first-order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. To provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data-generating mechanism. As the deconvolution needs to be performed on only a relatively small number of basis functions rather than voxel by voxel in the entire three-dimensional volume, the methodology is robust to typical brain imaging noise levels while also being computationally efficient. The new methodology is investigated through simulations on both one-dimensional functions and two-dimensional images, and is also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain. The research of Ci-Ren Jiang is supported in part by NSC 101-2118-M-001-013-MY2 (Taiwan); the research of Jane-Ling Wang is supported by NSF grants DMS-09-06813 and DMS-12-28369. JA is supported by EPSRC grant EP/K021672/2. The authors would like to thank SAMSI and the NDA programme where some of this research was carried out. This is the final version of the article; it first appeared from Taylor & Francis via http://dx.doi.org/10.1080/01621459.2015.106024
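    As a rough sketch of the basis-projection idea (the deconvolution step itself is omitted and the data below are synthetic), the snippet extracts a small functional principal component basis from a set of time-activity curves via an SVD of the centred data, so that subsequent processing acts on a handful of coefficient vectors rather than on every voxel's curve.

```python
import numpy as np

def fpca_basis(curves, n_components):
    """Empirical FPCA on sampled curves (rows = curves, columns = time points):
    returns the mean curve, the leading eigenfunctions and the per-curve scores."""
    mean = curves.mean(axis=0)
    centred = curves - mean
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:n_components]          # leading eigenfunctions, discretised on the time grid
    scores = centred @ basis.T         # coefficients of each curve in that basis
    return mean, basis, scores

# Hypothetical dynamic PET data: 5,000 voxel time-activity curves over 30 time frames.
rng = np.random.default_rng(3)
tac = rng.random((5000, 30)).cumsum(axis=1)          # crude synthetic uptake curves
mean, basis, scores = fpca_basis(tac, n_components=4)
# Downstream steps (e.g. deconvolution) now involve 4 basis functions and their
# scores instead of 5,000 individual voxel curves.
```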

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and are less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques acquire data in highly structured forms, they provide us with an opportunity to optimise the imaging process holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
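    As a generic illustration of one building block used in learned reconstruction from undersampled data (a sketch only, not the architecture proposed in the thesis), the snippet below applies a k-space data-consistency step: wherever a k-space location was actually sampled, the network's estimate is replaced by the acquired value.

```python
import numpy as np

def data_consistency(x_est, k_acquired, mask):
    """Enforce agreement with acquired k-space samples.
    x_est: current complex image estimate (e.g. a CNN output).
    k_acquired: undersampled k-space data (zero where not sampled).
    mask: boolean array, True where k-space was sampled."""
    k_est = np.fft.fft2(x_est)
    k_dc = np.where(mask, k_acquired, k_est)   # keep measured samples exactly
    return np.fft.ifft2(k_dc)

# Toy example with a hypothetical 4-fold random undersampling mask.
rng = np.random.default_rng(4)
image = rng.standard_normal((128, 128)).astype(complex)
mask = rng.random((128, 128)) < 0.25
k_acquired = np.fft.fft2(image) * mask
x_dc = data_consistency(rng.standard_normal((128, 128)).astype(complex), k_acquired, mask)
```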

    Some interacting particle methods with non-standard interactions

    Interacting particle methods are widely used to perform inference in complex models, with applications ranging from Bayesian statistics to the applied sciences. This thesis is concerned with the study of families of interacting particles which present non-standard interactions. The non-standard interactions that we study arise either from the particular class of problems we are interested in, Fredholm integral equations of the first kind, or from algorithmic design, as in the case of the Divide and Conquer sequential Monte Carlo algorithm. Fredholm integral equations of the first kind are a class of ill-posed inverse problems for which finding numerical solutions remains challenging. These equations are ubiquitous in the applied sciences and engineering, with applications in epidemiology, medical imaging, nonlinear regression settings and partial differential equations. We develop two interacting particle methods which provide an adaptive stochastic discretisation and do not require strong assumptions on the solution. While similar to well-studied families of interacting particle methods, the two algorithms that we develop present non-standard elements and require a novel theoretical analysis. We study the theoretical properties of the two proposed algorithms, establishing a strong law of large numbers and L_p error estimates, and compare their performance with alternatives on a suite of examples, including simulated data and realistic systems. The Divide and Conquer sequential Monte Carlo algorithm is an interacting particle method in which different sequential Monte Carlo approximations are merged together according to the topology of a given tree. We study the effect of the additional interactions due to the merging operations on the theoretical properties of the algorithm. Specifically, we show that the approximation error decays at rate
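    As a generic, self-contained illustration of an interacting particle method (this is a textbook sequential importance-resampling step, not either of the algorithms developed in the thesis), the particles below are weighted by a log-likelihood and then interact through multinomial resampling.

```python
import numpy as np

def smc_step(particles, log_weights, log_likelihood, rng):
    """One reweight-resample-move step of a basic sequential Monte Carlo sampler."""
    log_weights = log_weights + log_likelihood(particles)        # reweight
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # interaction via resampling
    particles = particles[idx] + 0.1 * rng.standard_normal(len(particles))  # small move/jitter
    return particles, np.zeros(len(particles))                   # weights reset after resampling

# Toy run: 1,000 particles driven towards a standard normal target.
rng = np.random.default_rng(5)
particles = 3.0 * rng.standard_normal(1000)
log_w = np.zeros(1000)
for _ in range(10):
    particles, log_w = smc_step(particles, log_w, lambda x: -0.5 * x**2, rng)
```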