
    Development of a Non-Contact Fluorescence Tomography System with Appropriate Reconstruction Techniques

    Molecular imaging is a highly topical research field based on the combination of highly selective markers and appropriate imaging devices. It is mainly concerned with studying the effects of prototype drugs in small animals, following the evolution of the disease in vivo day by day. Amongst the available imaging techniques, fluorescence-based imaging is very popular due to the simplicity of the experimental systems and the widespread availability of suitable probes. However, as light is heavily scattered in tissue, fluorescence images depend strongly on the depth of the inclusion, so that different images cannot be compared. Fluorescence-mediated tomography (FMT), as presented herein, is intended to overcome these shortcomings by providing a quantitative means of estimating fluorochrome concentrations in vivo. Usually, FMT systems rely on detector readings obtained through light-guiding fibers mounted in contact with the imaged animals. Recently, non-contact methods have been proposed, allowing CCD-camera images to be used as projection data. Herein, a study is presented that compares fiber-based and non-contact imaging methods and indicates, for the first time, the superiority of non-contact techniques. Based on these findings, a novel non-contact tomography system for small animals was developed. In phantoms as well as in an animal study, the capabilities of the system to reconstruct fluorescent sources in turbid media are demonstrated.

    Topics in image reconstruction for high resolution positron emission tomography

    Ill-posed problems are a topic of interdisciplinary interest arising in remote sensing and non-invasive imaging. However, there are issues crucial for the successful application of the theory to a given imaging modality. Positron emission tomography (PET) is a non-invasive imaging technique that allows assessing biochemical processes taking place in an organism in vivo. PET is a valuable tool in the investigation of normal human or animal physiology, in diagnosing and staging cancer, and in the study of heart and brain disorders. PET is similar to other tomographic imaging techniques in many ways, but to reach its full potential and to extract maximum information from projection data, PET has to use accurate, yet practical, image reconstruction algorithms. Several topics related to PET image reconstruction have been explored in the present dissertation. The following contributions have been made: (1) A system matrix model has been developed using an analytic detector response function based on linear attenuation of γ-rays in a detector array. It has been demonstrated that the use of an oversimplified system model for the computation of the system matrix results in image artefacts. (IEEE Trans. Nucl. Sci., 2000); (2) The dependence on total counts, modelled analytically, was used to simplify the utilisation of the cross-validation (CV) stopping rule and to accelerate statistical iterative reconstruction. It can be used instead of the original CV procedure for high-count projection data, when the CV yields reasonably accurate images. (IEEE Trans. Nucl. Sci., 2001); (3) A regularisation methodology employing singular value decomposition (SVD) of the system matrix was proposed, based on spatial resolution analysis. A characteristic property of the singular value spectrum shape was found that revealed a relationship between the optimal truncation level to be used with truncated-SVD reconstruction and the optimal reconstructed image resolution. (IEEE Trans. Nucl. Sci., 2001); (4) A novel event-by-event linear image reconstruction technique based on a regularised pseudo-inverse of the system matrix was proposed. The algorithm provides a fast way to update an image, potentially in real time, and allows, in principle, for the instant visualisation of the radioactivity distribution while the object is still being scanned. The computed image estimate is the minimum-norm least-squares solution of the regularised inverse problem.
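    To make contributions (3) and (4) concrete, the sketch below computes the minimum-norm least-squares image from a truncated-SVD (i.e. regularised) pseudo-inverse of the system matrix. It is a generic illustration, not the dissertation's implementation: the matrix A, the truncation level k and the Poisson toy-data model are all placeholder assumptions.

```python
import numpy as np

def tsvd_reconstruct(A, y, k):
    """Minimum-norm least-squares reconstruction via a truncated-SVD
    pseudo-inverse of the system matrix A: only the k largest singular
    values are kept, which regularises the ill-posed problem."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # x = V_k diag(1/s_k) U_k^T y  -- the regularised pseudo-inverse
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# Hypothetical toy sizes: 2000 detector-pair bins, 400 image pixels.
rng = np.random.default_rng(0)
A = rng.random((2000, 400))                 # stand-in system matrix
x_true = rng.random(400)
y = rng.poisson(A @ x_true).astype(float)   # Poisson-distributed counts
x_hat = tsvd_reconstruct(A, y, k=100)
```

    Discarding small singular values trades resolution for stability, which is precisely the truncation-level/resolution relationship that contribution (3) analyses.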

    Dense real-time 3D reconstruction from multiple images

    The rapid advance of computer graphics and acquisition technologies has led to the widespread use of 3D models. Techniques for 3D reconstruction from multiple views aim to recover the structure of a scene and the position and orientation (motion) of the camera using only the geometric constraints in 2D images. This problem, known as Structure from Motion (SfM), has been the focus of a great deal of research in recent years; however, the automatic, dense, real-time and accurate reconstruction of a scene is still a major research challenge. This thesis presents work that targets the development of efficient algorithms to produce high-quality and accurate reconstructions, introducing new computer vision techniques for camera motion calibration, dense SfM reconstruction and dense real-time 3D reconstruction. In SfM, a second challenge is to build an effective reconstruction framework that provides dense and high-quality surface modelling. This thesis develops a complete, automatic and flexible system with a simple user interface, from raw images to a 3D surface representation. As part of the proposed image reconstruction approach, this thesis introduces an accurate and reliable region-growing algorithm to propagate dense matching points from the sparse key points among all stereo pairs. This dense 3D reconstruction proposal addresses the deficiencies of existing SfM systems built on sparsely distributed 3D point clouds, which are insufficient for reconstructing a complete 3D model of a scene. Existing SfM reconstruction methods perform a bundle adjustment optimization of the global geometry in order to obtain an accurate model. Such an optimization is very computationally expensive and cannot be implemented in a real-time application. Extended Kalman Filter (EKF) Simultaneous Localization and Mapping (SLAM) addresses the problem of estimating, in real time, the structure of the surrounding world as perceived by moving sensors (cameras) while simultaneously localizing the sensor within it. However, standard EKF-SLAM techniques are susceptible to errors introduced during the linearization of the state prediction and measurement prediction.
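    For readers unfamiliar with the linearization step referred to in the last sentence, the following minimal sketch shows one generic EKF predict/update cycle; the Jacobians F and H are exactly where linearization errors enter EKF-SLAM. This is a textbook EKF step under assumed model callables, not the system developed in the thesis.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic EKF predict/update cycle as used in EKF-SLAM.
    f, h: nonlinear motion and measurement models; F, H: their
    Jacobians, i.e. the first-order linearizations whose errors the
    text identifies; Q, R: process and measurement noise covariances."""
    # Predict: propagate state and covariance through the motion model
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct the prediction with the measurement z
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R            # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

    In EKF-SLAM the state x stacks the camera pose with the map features, so P grows with the map, and any error in the Taylor approximations F and H propagates into both.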

    Information theoretic regularization in diffuse optical tomography

    Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering these parameters of interest involves solving a non-linear and severely ill-posed inverse problem. In this thesis we propose methods for the regularization of DOT via the introduction of spatially unregistered, a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of joint entropy (JE) and mutual information (MI). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multi-modality barrier, which manifests as the incommensurate relation between the gray-value representations of corresponding anatomical features in the modalities involved. By introducing structural a priori information into the image reconstruction process, we aim to improve the spatial resolution and quantitative accuracy of the solution. A further condition for the accurate incorporation of a priori information is the establishment of correct alignment between the prior image and the probed anatomy in a common coordinate system. However, only limited information regarding the probed anatomy is available prior to the reconstruction process. In this work we explore the possibility of spatially registering the prior image simultaneously with the solution of the reconstruction process. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results obtained from numerical simulations as well as experimental data. In addition, we compare the performance of MI and JE. Finally, we propose a method for fast joint entropy evaluation and optimization, which we later employ for the information-theoretic regularization of DOT. The main areas involved in this thesis are: inverse problems, image reconstruction and regularization, diffuse optical tomography and medical image registration.
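    As an illustration of the two functionals involved, the sketch below estimates joint entropy and mutual information between an optical image and an anatomical prior from their joint gray-value histogram. It is a minimal histogram-based estimate for intuition only: a regularizer used inside a reconstruction would need a differentiable (e.g. Parzen-window) estimator, and the bin count here is an arbitrary assumption.

```python
import numpy as np

def joint_entropy_and_mi(img_a, img_b, bins=32):
    """Joint entropy H(A,B) and mutual information I(A;B) of two
    images, estimated from their joint gray-value histogram. Both
    functionals compare images through the co-occurrence of gray
    values, so no common intensity scale between modalities is needed."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()              # joint probability table
    p_a = p_ab.sum(axis=1)                  # marginal of image A
    p_b = p_ab.sum(axis=0)                  # marginal of image B
    nz = p_ab > 0                           # avoid log(0)
    h_ab = -np.sum(p_ab[nz] * np.log(p_ab[nz]))
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return h_ab, h_a + h_b - h_ab           # H(A,B), I(A;B)
```

    A well-aligned, structurally consistent pair yields low joint entropy and high mutual information, which is why these quantities can serve both as regularizers and as registration criteria.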

    Camera self-calibration and analysis of singular cases

    Master's thesis (Master of Engineering)

    Models and image reconstruction in electrical impedance tomography of human brain function

    Electrical Impedance Tomography (EIT) of brain function has the potential to provide a rapid, portable, bedside neuroimaging device. Recently, our group published the first ever EIT images of evoked activity recorded with scalp electrodes. While the raw data showed encouraging, reproducible changes of a few per cent, the images were noisy. The poor image quality was due, in part, to the use of a simplified reconstruction algorithm which modelled the head as a homogeneous sphere. The purpose of this work has been to develop new algorithms in which the model incorporates extracerebral layers and realistic geometry, and to assess their effect on image quality. An algorithm was suggested which allowed a fair comparison between reconstructions assuming analytical and numerical (Finite Element Method, FEM) models of the head as a homogeneous sphere and as concentric spheres representing the brain, CSF, skull and scalp. Comparison was also made between these and numerical models of the head as a homogeneous head-shaped volume and as a head-shaped volume with internal compartments of contrasting resistivity. The models were tested on computer simulations, on spherical and head-shaped saline-filled tanks, and on data collected during human evoked response studies. EIT also has the potential to image the resistance changes which occur during neuronal depolarization in the cortex and last tens of milliseconds. Also presented in this thesis is an estimate of their magnitude, made using a mathematical model, based on cable theory, of resistance changes at DC during depolarization in the cerebral cortex. Published values were used for the electrical properties and geometry of cell processes (Rall, 1975). The study was performed in order to estimate the resultant scalp signal that might be obtained and to assess the ability of EIT to produce images of neuronal depolarization.
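    For context, the kind of linearized time-difference reconstruction that such head models feed into can be written in one step: a Tikhonov-regularised inversion of a sensitivity (Jacobian) matrix computed from the chosen forward model. The sketch below is a generic illustration of that standard approach, not the thesis's specific algorithm; the shapes and regularisation weight are assumptions, and the head model (homogeneous sphere, concentric spheres or realistic FEM) enters only through J.

```python
import numpy as np

def linear_eit_reconstruction(J, dv, lam=1e-3):
    """One-step linearized time-difference EIT reconstruction:
    recover the resistivity-change image dx from the vector of
    boundary-voltage changes dv, given the sensitivity (Jacobian)
    matrix J of the chosen head model. The Tikhonov term lam
    stabilizes the severely ill-posed inversion."""
    n = J.shape[1]
    # dx = (J^T J + lam I)^{-1} J^T dv
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)
```

    A more accurate head model changes J, which is how the analytical, multi-shell and FEM models compared in the thesis alter the reconstructed images.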

    SPECT System Design Optimisation for a Simultaneous SPECT/MRI Clinical Scanner

    The aim of this project was to optimize the design of a Single Photon Emission Computed Tomography (SPECT) insert, based on high-resolution detectors and a high-sensitivity collimator, for a Magnetic Resonance Imaging (MRI) scanner, in order to perform simultaneous human brain SPECT/MRI and improve radionuclide-based therapies for glioma patients. The radionuclides of interest are 99mTc, 111In and 123I. Specific emphasis was given to the collimator and overall system design, data simulation and performance assessment, which would feed directly into the European-funded INSERT project. The SPECT insert was to consist of a stationary system with SiPM-based photodetectors, insensitive to magnetic fields. Regarding the design, a number of system and collimator geometries were evaluated, considering the restricted space in the MRI bore and the limited angular sampling. High sensitivity was prioritised over high spatial resolution because of the clinical application. Gamma shielding design was also addressed. Analytical calculations of system sensitivity and resolution, in addition to Monte Carlo simulations, were performed to compare various slit-slat and pinhole collimator designs. A new collimator design was proposed: the multi-mini-slit slit-slat (MSS) collimator. The MSS has multiple mini-slits, some of which are shared between adjacent detectors; they are embedded in the slat component, allowing for longer slats in comparison to a standard slit-slat collimator. The MSS design demonstrated the best overall performance, and the final system design consisted of a partial ring with 20 detectors. A framework for geometrical calibration of the system was developed and assessed using a single prototype detector equipped with a prototype collimator. This framework takes advantage of the specific collimator design to estimate geometrical parameters from independent measurements of calibration phantoms. Experimental evaluation with tomographic acquisitions of phantoms demonstrated the applicability of the new collimation concept, confirming the superiority of the MSS design over equivalent pinhole collimation.
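    To give a flavour of the analytical sensitivity calculations mentioned above, the sketch below evaluates the standard geometric efficiency of a pinhole collimator, g = d_eff² · cos³θ / (16 h²). This is the textbook formula commonly used in such design comparisons, not the project's full model of the MSS or slit-slat geometries, and the example numbers are arbitrary.

```python
import numpy as np

def pinhole_sensitivity(d_eff, h, theta_rad=0.0):
    """Standard geometric efficiency of a pinhole collimator:
    g = d_eff^2 * cos^3(theta) / (16 h^2), where d_eff is the
    effective pinhole diameter, h the source-to-pinhole distance
    and theta the off-axis angle of the source."""
    return (d_eff ** 2) * np.cos(theta_rad) ** 3 / (16.0 * h ** 2)

# Example: 2 mm pinhole viewed on-axis from 150 mm (brain-SPECT scale)
print(pinhole_sensitivity(d_eff=2.0, h=150.0))   # ~1.1e-05
```

    The cos³θ factor is what makes sensitivity fall off away from the axis, one reason multi-aperture designs such as the MSS can gain overall efficiency within a constrained MRI bore.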

    Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography

    An inverse problem is a mathematical framework used to obtain information about a physical object or system from observed measurements. It typically arises when we wish to obtain information about internal data from external measurements, and it has many applications in science and technology, such as medical imaging, geophysical imaging, image deblurring, image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical finance and physics. The main goal of this PhD thesis was to use state-of-the-art inverse problem techniques to develop modern reconstruction methods for solving the fluorescence diffuse optical tomography (fDOT) problem. fDOT is a molecular imaging technique that enables the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small animals. One of the main difficulties in fDOT is that the high absorption and scattering of biological tissues lead to an ill-posed inverse problem, yielding multiple non-unique and unstable solutions, so the problem requires regularization to achieve a stable solution. The so-called non-contact fDOT scanners use CCD cameras as virtual detectors instead of optical fibers in contact with the sample. These non-contact systems generate huge datasets that lead to a computationally demanding inverse problem; techniques that minimize the size of the acquired datasets without losing image performance are therefore highly desirable. The first part of this thesis addresses the optimization of experimental setups to reduce the dataset size using l₂-based regularization techniques. The second part, motivated by the success of l₁ regularization techniques for denoising and image reconstruction, is devoted to advanced regularization using l₁-based techniques, and the last part introduces compressed sensing (CS) theory, which enables a further reduction of the acquired dataset size. The main contributions of this thesis are: 1) A feasibility study (to our knowledge, the first for fDOT) of the automatic U-curve method to select the regularization parameter (l₂ norm). The U-curve method proved to be an excellent automatic method for dealing with large datasets because it reduces the search for the regularization parameter to a suitable interval. 2) Once an automatic method to choose the l₂ regularization parameter for fDOT was found, a singular value analysis (SVA) of the fDOT forward matrix was used to maximize the information content in the acquired measurements and minimize the computational cost. It was shown for the first time that large meshes can be reduced in the z direction without any loss in imaging performance, while reducing computational time and memory requirements. 3) Regarding l₁-based regularization techniques, we present a novel iterative algorithm, ART-SB, that combines the advantage of the algebraic reconstruction technique (ART) in handling large datasets with Split Bregman (SB) denoising, an approach which has been shown to be optimal for total variation (TV) denoising. SB has been implemented in a cost-efficient way to handle large datasets, which makes ART-SB more computationally efficient than previous TV-based reconstruction algorithms and most splitting approaches. 4) Finally, we propose a novel approach to CS for fDOT, named the SB-SVA iterative method. This approach is based on the analysis-based co-sparse representation model, in which an analysis operator multiplies the image, transforming it into a sparse one.
Taking advantage of the CS-SB algorithm, we restrict the solution reached at each CS-SB iteration to a space where the singular values of the forward matrix and the sparsity structure of the solution combine in a beneficial manner, that is, where very small singular values are not associated with non-zero entries of the sparse solution. In this way, SB-SVA indirectly enforces the well-conditioning of the forward matrix while designing (learning) the analysis operator and finding the solution. Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and needs fewer acquisition parameters. All the approaches presented in this thesis were validated with experimental data.
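    As an illustration of contribution 1, the sketch below selects the Tikhonov (l₂) regularization parameter with the U-curve criterion, which minimizes U(λ) = 1/‖Ax_λ − b‖² + 1/‖x_λ‖² over candidate values of λ. This is a minimal grid-search version computed through a dense SVD of the forward matrix; the candidate grid and the dense factorization are simplifying assumptions rather than the thesis's implementation.

```python
import numpy as np

def u_curve_lambda(A, b, lambdas):
    """Pick the Tikhonov regularization parameter by the U-curve
    criterion: U(lam) = 1/||A x_lam - b||^2 + 1/||x_lam||^2, minimized
    over a grid of candidate lambdas. The SVD of A makes each
    evaluation cheap, since only filter factors change with lam."""
    U_mat, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U_mat.T @ b                      # data projected on left singular vectors
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)             # Tikhonov filter factors
        eta = np.sum((f * beta / s)**2)     # ||x_lam||^2
        rho = np.sum(((1 - f) * beta)**2)   # in-range residual norm^2
        u_val = 1.0 / rho + 1.0 / eta
        if best is None or u_val < best[0]:
            best = (u_val, lam)
    return best[1]

# Hypothetical usage with a toy forward matrix and data vector:
rng = np.random.default_rng(1)
A = rng.random((300, 120))
b = rng.random(300)
lam_opt = u_curve_lambda(A, b, np.logspace(-6, 2, 50))
```

    The criterion blows up both when the residual is tiny (under-regularized) and when the solution norm is tiny (over-regularized), so its minimum brackets the useful interval of λ, which is the property the feasibility study exploits for large fDOT datasets.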