
    Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography

    An inverse problem is a mathematical framework used to obtain information about a physical object or system from observed measurements. It typically arises when we wish to infer internal properties from external measurements, and it has many applications in science and technology, such as medical imaging, geophysical imaging, image deblurring, image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical finance, and physics. The main goal of this PhD thesis was to use state-of-the-art inverse problem techniques to develop modern reconstruction methods for solving the fluorescence diffuse optical tomography (fDOT) problem. fDOT, also known as fluorescence molecular tomography (FMT), is a molecular imaging technique that enables the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small animals. One of the main difficulties in fDOT is that the high absorption and scattering of biological tissues lead to an ill-posed inverse problem whose solutions are non-unique and unstable, so the problem requires regularization to achieve a stable solution. So-called “non-contact fDOT scanners” use CCD cameras as virtual detectors instead of optic fibers in contact with the sample. These non-contact systems generate huge datasets that lead to a computationally demanding inverse problem, so techniques that minimize the size of the acquired datasets without losing image performance are highly desirable. The first part of this thesis addresses the optimization of experimental setups to reduce the dataset size using l₂-based regularization techniques. The second part, motivated by the success of l₁ regularization techniques for denoising and image reconstruction, is devoted to advanced regularization of the problem using l₁-based techniques, and the last part introduces compressed sensing (CS) theory, which enables a further reduction of the acquired dataset size.
    The main contributions of this thesis are: 1) A feasibility study (to our knowledge, the first for fDOT) of the automatic U-curve method for selecting the regularization parameter (l₂-norm). The U-curve method proved to be an excellent automatic method for dealing with large datasets because it reduces the regularization parameter search to a suitable interval. 2) Once an automatic method to choose the l₂ regularization parameter for fDOT was available, singular value analysis (SVA) of the fDOT forward matrix was used to maximize the information content of the acquired measurements and minimize the computational cost. It was shown for the first time that large meshes can be reduced in the z direction without any loss in imaging performance, while reducing computational time and memory requirements. 3) Regarding l₁-based regularization techniques, we presented a novel iterative algorithm, ART-SB, that combines the advantage of the algebraic reconstruction technique (ART) in handling large datasets with Split Bregman (SB) denoising, an approach that has been shown to be optimal for total variation (TV) denoising. SB was implemented in a cost-efficient way to handle large datasets, which makes ART-SB more computationally efficient than previous TV-based reconstruction algorithms and most splitting approaches. 4) Finally, we proposed a novel approach to CS for fDOT, named the SB-SVA iterative method. This approach is based on the analysis-based co-sparse representation model, in which an analysis operator multiplies the image, transforming it into a sparse one.
    Taking advantage of the CS-SB algorithm, we restrict the solution reached at each CS-SB iteration to a space where the singular values of the forward matrix and the sparsity structure of the solution combine in a beneficial manner, that is, where very small singular values are not associated with non-zero entries of the sparse solution. In this way, SB-SVA indirectly enforces the well-conditioning of the forward matrix while designing (learning) the analysis operator and finding the solution. Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and needs fewer acquisition parameters. The approaches presented here have been validated with experimental data.
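    To make the ART-SB idea above concrete, the following is a minimal Python sketch of the alternation it describes: ART (Kaczmarz) sweeps enforcing data consistency, interleaved with a TV-flavoured denoising step. The thesis uses a cost-efficient Split Bregman TV denoiser; here a crude smoothed-TV gradient step stands in for it, so this is an illustration of the scheme, not the authors' implementation.

        import numpy as np

        def art_sweep(A, b, x, relax=1.0):
            # One ART (Kaczmarz) pass: project x onto each measurement hyperplane.
            for i in range(A.shape[0]):
                a = A[i]
                nrm2 = a @ a
                if nrm2 > 0:
                    x = x + relax * (b[i] - a @ x) / nrm2 * a
            return x

        def smooth_tv_step(x, shape, weight=0.1, eps=1e-6):
            # Crude stand-in for the Split Bregman TV denoiser: one explicit
            # gradient step on the smoothed total-variation energy of a 2D image.
            img = x.reshape(shape)
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            div = (np.diff(gx / mag, axis=0, prepend=0.0)
                   + np.diff(gy / mag, axis=1, prepend=0.0))
            return (img + weight * div).ravel()

        def art_sb_like(A, b, shape, n_outer=20):
            # Alternate a data-consistency sweep (ART) with a denoising step,
            # the role Split Bregman TV denoising plays in ART-SB.
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                x = art_sweep(A, b, x)
                x = smooth_tv_step(x, shape)
            return x

    In the actual method the denoising step is the Split Bregman solver, implemented so that it scales to the huge datasets produced by non-contact fDOT scanners.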

    Topics in image reconstruction for high resolution positron emission tomography

    Ill-posed problems are a topic of interdisciplinary interest arising in remote sensing and non-invasive imaging. However, there are issues crucial for the successful application of the theory to a given imaging modality. Positron emission tomography (PET) is a non-invasive imaging technique that allows assessing biochemical processes taking place in an organism in vivo. PET is a valuable tool in the investigation of normal human or animal physiology, in diagnosing and staging cancer, and in studying heart and brain disorders. PET is similar to other tomographic imaging techniques in many ways, but to reach its full potential and to extract maximum information from projection data, PET has to use accurate, yet practical, image reconstruction algorithms. Several topics related to PET image reconstruction have been explored in the present dissertation. The following contributions have been made: (1) A system matrix model has been developed using an analytic detector response function based on the linear attenuation of γ-rays in a detector array. It has been demonstrated that the use of an oversimplified system model for the computation of the system matrix results in image artefacts (IEEE Trans. Nucl. Sci., 2000). (2) The analytically modelled dependence on total counts was used to simplify utilisation of the cross-validation (CV) stopping rule and accelerate statistical iterative reconstruction. It can be utilised instead of the original CV procedure for high-count projection data, when CV yields reasonably accurate images (IEEE Trans. Nucl. Sci., 2001). (3) A regularisation methodology employing singular value decomposition (SVD) of the system matrix was proposed, based on spatial resolution analysis. A characteristic property of the singular value spectrum shape was found that reveals a relationship between the optimal truncation level to be used with truncated-SVD reconstruction and the optimal reconstructed image resolution (IEEE Trans. Nucl. Sci., 2001). (4) A novel event-by-event linear image reconstruction technique based on a regularised pseudo-inverse of the system matrix was proposed. The algorithm provides a fast way to update an image, potentially in real time, and allows, in principle, the instant visualisation of the radioactivity distribution while the object is still being scanned. The computed image estimate is the minimum-norm least-squares solution of the regularised inverse problem.
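    As a concrete illustration of contributions (3) and (4), the following Python sketch shows generic truncated-SVD reconstruction and the event-by-event update that a precomputed regularised pseudo-inverse makes possible; matrix sizes and the truncation level are toy assumptions, not values from the dissertation.

        import numpy as np

        def tsvd_reconstruct(A, y, k):
            # Truncated-SVD solution: discard the small singular values of the
            # system matrix A; the truncation level k is the regularisation
            # parameter that the dissertation ties to image resolution.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

        def regularised_pinv(A, k):
            # Precomputed regularised pseudo-inverse for event-by-event updates.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

        # Event-by-event update: a new coincidence event in projection bin i
        # simply adds column i of P to the running image estimate, which is why
        # the image can be refreshed, potentially in real time, during the scan:
        #     x_est += P[:, i]

    Because the per-event update is a single column addition, the cost of refreshing the image is independent of the number of events already processed.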

    Utilizing higher-order basis functions for estimating the shape of metallic and dielectric objects

    An electromagnetic qualitative microwave imaging method, which relies on solving an inverse scattering problem, is presented. In the introductory part of this dissertation, the state of the art is briefly summarized and the main advantages of the proposed method over other known methods are outlined. After the introduction, we define the basic idea of an inverse problem and contrast it with the well-known formulation of the direct electromagnetic problem. We then explain the main difficulties that arise when attempting to solve such an inverse problem, i.e., we show that these problems are in general non-linear and ill-posed. The multipole expansion technique, a fundamental tool of analytical electromagnetics on which the presented method is based, is also described in detail...

    Microwave Imaging of Brain Stroke: Contributions to Modeling and Inverse Problem Resolution

    Brain stroke is an age-related illness that has become a major issue in our ageing societies. Early diagnosis and treatment are of great importance for the full recovery of the patient, as the abbreviation FAST (Face, Arm, Speech, Time) used in Anglo-Saxon countries reminds us, referring both to the four major visible signs and to the necessity to act fast. In this respect, Computed Tomography (CT) and Nuclear Magnetic Resonance (NMR) imaging are key diagnostic tools in clinical practice. Unfortunately, these modalities can neither be transported nor rapidly deployed, which would allow early treatment (especially in rural environments), nor can they be brought to the bedside of the patient to monitor the evolution of the disease. Microwave Imaging (MWI) is a potential candidate to provide fast and accurate diagnostic insight into brain stroke pathological states. The head of the patient is illuminated with low-power microwave waveforms (non-ionizing radiation), whose backscattered signals are used to generate either images of its internal structures, distributions, patterns, and shapes (qualitative imaging) or directly its physical parameters, such as the dielectric contrast and the permittivity values (quantitative imaging). The technology relies on the high sensitivity of microwaves to the water content of tissues to discriminate between pathological and healthy regions. This thesis focuses on both the forward modeling of the electromagnetic phenomena arising in biological tissues and the inverse scattering problem for imaging in the differential MWI (dMWI) scenario for brain stroke monitoring. It is intrinsically interdisciplinary, as it requires knowledge in biology, medicine, physics, chemistry, and engineering. In order to investigate the challenges arising in brain MWI, it is crucial to have accurate and efficient solvers to model electromagnetic (EM) fields at UHF/SHF bands. The head is a distributed, heterogeneous, and lossy scatterer for which existing solvers are known to struggle at higher frequencies. Volume Integral Equation (VIE) formulations and MultiGrid (MG) approaches are investigated to find the actual solution of the field distributions for large-scale problems. The EM modeling also makes it possible to analyze the feasibility of brain MWI, which depends on the power transmitted from the antennas into the human brain. To estimate this transmission, simplified but still representative head models, including the intermediate layers (skin, fat, bone, and CerebroSpinal Fluid, CSF), are proposed in the framework of simulations (analytical tools) and experimental validations (a 3D-printed head phantom). For the imaging task, the physics of EM scattering leads to complex non-linear inverse scattering problems (consisting of retrieving, from a set of field measurements, the physical parameters that produced them), for which reliable assumptions and approximations must be found. For brain MWI, estimating and quantifying the degree of non-linearity allows determining the scope of application of existing algorithms, to which different regularizers are applied. The modeling and inverse problem resolution for brain MWI investigated in the present work are ultimately meant to contribute to the development of a technology dedicated to brain stroke detection, differentiation, and monitoring.
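    As a hedged sketch of the differential-imaging step discussed above: under a linearizing Born-type approximation (our assumption here; the thesis quantifies the degree of non-linearity to decide when such approximations hold), the dMWI update reduces to a regularized linear solve on the difference of the fields measured at two instants. The operator G below is a hypothetical discretized scattering operator, not the thesis' algorithm.

        import numpy as np

        def differential_inversion(G, e_after, e_before, alpha=1e-2):
            # dMWI works on the change between two measurement instants; under
            # a Born-type linearization the dielectric-contrast update solves a
            # Tikhonov-regularized normal system.
            # G: (n_meas, n_voxels) complex scattering operator (hypothetical).
            d_e = e_after - e_before          # differential scattered field
            lhs = G.conj().T @ G + alpha * np.eye(G.shape[1])
            return np.linalg.solve(lhs, G.conj().T @ d_e)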

    Spectrally efficient FDM communication signals and transceivers: design, mathematical modelling and system optimization

    This thesis addresses theoretical, mathematical modelling and design issues of Spectrally Efficient FDM (SEFDM) systems. SEFDM systems offer bandwidth savings compared to Orthogonal FDM (OFDM) systems by multiplexing multiple non-orthogonal overlapping carriers. Nevertheless, the deliberate collapse of orthogonality poses significant challenges for the SEFDM system in terms of performance and complexity; both issues are addressed in this work. The thesis first investigates the mathematical properties of the SEFDM system and reveals the links between the system conditioning and its main parameters through closed-form formulas derived for the Intercarrier Interference (ICI) and the system generating matrices. A rigorous and efficient mathematical framework for representing non-orthogonal signals using Inverse Discrete Fourier Transform (IDFT) blocks is proposed. This is subsequently used to design simple SEFDM transmitters and to realize a new Matched Filter (MF) based demodulator using the Discrete Fourier Transform (DFT), thereby substantially simplifying the transmitter and demodulator design and localizing complexity at the detection stage at no cost in performance. Operation is confirmed through the derivation and numerical verification of optimal detectors in the form of Maximum Likelihood (ML) and Sphere Decoder (SD) detectors. Moreover, two new linear detectors that address the ill conditioning of the system are proposed: the first based on the Truncated Singular Value Decomposition (TSVD) and the second, termed Selective Equalization (SelE), accounting for selected ICI terms. Numerical investigations show that both detectors substantially outperform existing linear detection techniques. Furthermore, the use of the Fixed Complexity Sphere Decoder (FSD) is proposed to further improve performance and avoid the variable complexity of the SD. Ultimately, a newly designed combined FSD-TSVD detector is proposed and shown to provide near-optimal error performance for bandwidth savings of 20% with reduced and fixed complexity. The thesis also addresses practical considerations of SEFDM systems. In particular, mathematical and numerical investigations show that the SEFDM signal is prone to a high Peak to Average Power Ratio (PAPR) that can lead to significant performance degradation. Investigations of PAPR control led to the proposal of a new technique, termed SLiding Window (SLW), which utilizes the SEFDM signal structure and shows superior efficacy in PAPR control over conventional techniques, at lower complexity. The thesis also addresses the performance of the SEFDM system in multipath fading channels, confirming favourable performance and practicability of implementation. In particular, a new Partial Channel Estimator (PCE) that provides better estimation accuracy is proposed. Furthermore, several low-complexity linear and iterative joint channel equalizers and symbol detectors are investigated in fading channel conditions, with FSD-TSVD joint equalization and detection using the PCE-obtained channel estimate facilitating near-optimum error performance, close to that of OFDM, for bandwidth savings of 25%. Finally, investigations of precoding of the SEFDM signal demonstrate a potential for complexity reduction and performance improvement. Overall, this thesis provides the theoretical basis from which practical designs are derived, paving the way to the first practical realization of SEFDM systems.
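    The following Python sketch illustrates the core SEFDM mechanics described above under toy assumptions of our own (16 carriers, alpha = 0.8, i.e. the 20% bandwidth saving quoted, a noiseless link, and an ad hoc truncation threshold): an IDFT-like modulator with sub-orthogonal carrier spacing, a matched-filter correlator, and TSVD regularization of the resulting ill-conditioned system, in the spirit of the proposed TSVD detector.

        import numpy as np

        def sefdm_matrix(n_carriers, n_samples, alpha):
            # Columns are carriers packed at a fraction alpha < 1 of the
            # orthogonal (OFDM) spacing, which is what breaks orthogonality.
            n = np.arange(n_samples)[:, None]
            k = np.arange(n_carriers)[None, :]
            return np.exp(2j * np.pi * alpha * n * k / n_samples) / np.sqrt(n_samples)

        alpha, N, Q = 0.8, 16, 32
        F = sefdm_matrix(N, Q, alpha)
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        x = qpsk[np.random.randint(4, size=N)]
        tx = F @ x                                 # SEFDM transmit samples

        r = F.conj().T @ tx                        # matched-filter outputs, r = C x
        U, s, Vt = np.linalg.svd(F.conj().T @ F)   # C = F^H F carries the ICI
        k = int(np.sum(s > 1e-3 * s[0]))           # toy truncation threshold
        x_hat = Vt[:k].conj().T @ ((U[:, :k].conj().T @ r) / s[:k])
        print(np.max(np.abs(x_hat - x)))           # small if little truncation occurred

    As alpha decreases, the singular values of C = F^H F decay faster and detection becomes harder, which is the conditioning-versus-bandwidth-saving trade-off the thesis characterizes.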

    FPGA Acceleration of Domain-specific Kernels via High-Level Synthesis

    The abstract is in the attachment.

    Iterative receiver for MIMO-OFDM systems based on sphere decoding: convergence, performance and complexity

    Recently, iterative processing has been widely considered as a way to achieve near-capacity performance and reliable high-data-rate transmission in future wireless communication systems. However, such iterative processing poses significant challenges for efficient receiver design. In this thesis, an iterative receiver combining multiple-input multiple-output (MIMO) detection with channel decoding is investigated for high-data-rate transmission. The convergence, performance, and computational complexity of the iterative receiver for a MIMO-OFDM system are considered. First, we review the most relevant hard-output and soft-output MIMO detection algorithms, based on equalization, sphere decoding, K-Best decoding, and interference cancellation. A low-complexity K-Best (LC-K-Best) decoder is then proposed to substantially reduce the computational complexity without significant performance degradation. We then analyze the convergence behaviour of combining these detection algorithms with various forward error correction codes, namely the LTE turbo decoder and an LDPC decoder, with the help of Extrinsic Information Transfer (EXIT) charts. Based on this analysis, a new scheduling order of the required inner and outer iterations is suggested. The performance of the proposed receiver is evaluated in various LTE channel environments and compared with other MIMO detection schemes. Secondly, the computational complexity of the iterative receiver with different channel coding techniques is evaluated and compared for different modulation orders and coding rates. Simulation results show that the proposed approaches achieve near-optimal performance and, more importantly, can substantially reduce the computational complexity of the system. From a practical point of view, fixed-point representation is usually used to reduce hardware costs in terms of area, power consumption, and execution time. We therefore present an efficient fixed-point arithmetic design of the proposed iterative receiver based on the LC-K-Best decoder. Additionally, the impact of channel estimation on system performance is studied. Finally, the proposed iterative receiver is tested in a real-time environment using the MIMO WARP platform, validating the proposed scheme.
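    To make the K-Best principle concrete, here is a small self-contained Python sketch of a generic breadth-first K-Best MIMO detector (hard output, noiseless toy channel; this is the textbook algorithm, not the low-complexity LC-K-Best variant proposed in the thesis): after QR-decomposing the channel, symbols are detected level by level while keeping only the K best partial candidates.

        import numpy as np

        def k_best_detect(H, y, constellation, K=4):
            # Breadth-first K-Best tree search on the QR-decomposed channel.
            nt = H.shape[1]
            Q, R = np.linalg.qr(H)
            z = Q.conj().T @ y                    # z = R x (+ noise)
            partial = [([], 0.0)]                 # (symbols, accumulated metric)
            for level in range(nt - 1, -1, -1):   # last antenna first (R upper triangular)
                children = []
                for syms, cost in partial:
                    for s in constellation:
                        cand = [s] + syms         # cand[i] is the symbol at antenna level + i
                        interf = sum(R[level, level + 1 + i] * cand[1 + i]
                                     for i in range(len(syms)))
                        inc = abs(z[level] - R[level, level] * s - interf) ** 2
                        children.append((cand, cost + inc))
                partial = sorted(children, key=lambda c: c[1])[:K]   # keep the K best
            return np.array(partial[0][0])

        # Toy 2x2 example with QPSK, noiseless for illustration.
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
        x = qpsk[np.random.randint(4, size=2)]
        print(k_best_detect(H, y=H @ x, constellation=qpsk), x)

    Keeping a fixed number K of candidates per level gives the detector a fixed, parallelizable complexity, which is what makes this family attractive for the hardware-oriented, fixed-point design discussed above.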