191 research outputs found

    Topics in image reconstruction for high resolution positron emission tomography

    Ill-posed problems are a topic of interdisciplinary interest arising in remote sensing and non-invasive imaging. However, issues crucial for the successful application of the theory to a given imaging modality remain. Positron emission tomography (PET) is a non-invasive imaging technique that allows assessing biochemical processes taking place in an organism in vivo. PET is a valuable tool in the investigation of normal human or animal physiology, in diagnosing and staging cancer, and in studying heart and brain disorders. PET is similar to other tomographic imaging techniques in many ways, but to reach its full potential and to extract maximum information from projection data, PET has to use accurate, yet practical, image reconstruction algorithms. Several topics related to PET image reconstruction have been explored in the present dissertation. The following contributions have been made: (1) A system matrix model has been developed using an analytic detector response function based on linear attenuation of [gamma]-rays in a detector array. It has been demonstrated that the use of an oversimplified system model for the computation of the system matrix results in image artefacts. (IEEE Trans. Nucl. Sci., 2000); (2) An analytic model of the dependence of image statistics on total counts was used to simplify the cross-validation (CV) stopping rule and to accelerate statistical iterative reconstruction. It can be used instead of the original CV procedure for high-count projection data, where the CV rule yields reasonably accurate images. (IEEE Trans. Nucl. Sci., 2001); (3) A regularisation methodology employing singular value decomposition (SVD) of the system matrix was proposed based on spatial resolution analysis. A characteristic property of the singular value spectrum was found that revealed a relationship between the optimal truncation level for truncated-SVD reconstruction and the optimal reconstructed image resolution. (IEEE Trans. Nucl. Sci., 2001); (4) A novel event-by-event linear image reconstruction technique based on a regularised pseudo-inverse of the system matrix was proposed. The algorithm provides a fast way to update an image, potentially in real time, and allows, in principle, for the instant visualisation of the radioactivity distribution while the object is still being scanned. The computed image estimate is the minimum-norm least-squares solution of the regularised inverse problem.
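
    A minimal numpy sketch of the event-by-event idea in contribution (4): a truncated-SVD regularised pseudo-inverse is precomputed, and each detected event adds one column of it to the running image estimate. The system matrix, truncation threshold and event simulation below are illustrative assumptions, not the dissertation's analytic detector-response model.

```python
# Hedged sketch (not the dissertation's code): event-by-event reconstruction with a
# truncated-SVD regularised pseudo-inverse of a toy PET system matrix A (bins x voxels).
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_voxels = 200, 64          # toy problem sizes (assumptions)
A = rng.random((n_bins, n_voxels))  # stand-in for the analytic detector-response model

# Regularised pseudo-inverse via truncated SVD: keep singular values above a cutoff.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 0.05 * s[0]))            # truncation level (assumed heuristic)
A_pinv = (Vt[:k].T / s[:k]) @ U[:, :k].T    # shape (n_voxels, n_bins)

# Simulate a list-mode acquisition: each event is the index of the projection bin it hit.
x_true = rng.random(n_voxels)
bin_probs = A @ x_true
bin_probs /= bin_probs.sum()
events = rng.choice(n_bins, size=5000, p=bin_probs)

# Event-by-event update: each detected event adds one column of the pseudo-inverse,
# so the image estimate can be refreshed while data are still being acquired.
x_est = np.zeros(n_voxels)
for i in events:
    x_est += A_pinv[:, i]
# x_est equals A_pinv @ bincount(events): the regularised minimum-norm least-squares estimate.
```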

    Statistical learning for predictive targeting in online advertising


    Learning Graphical Models of Multivariate Functional Data with Applications to Neuroimaging

    This dissertation investigates functional graphical models that infer functional connectivity from neuroimaging data, which are noisy, high dimensional and have limited samples. The dissertation provides two recipes for inferring the functional graphical model: 1) a fully Bayesian framework, and 2) an end-to-end deep model. We first propose a fully Bayesian regularization scheme to estimate functional graphical models. We consider a direct Bayesian analog of the functional graphical lasso proposed by Qiao et al. (2019). We then propose a regularization strategy via the graphical horseshoe. We compare both Bayesian approaches to the frequentist functional graphical lasso, and compare the Bayesian functional graphical lasso to the functional graphical horseshoe. We apply the proposed methods to electroencephalography (EEG) data and diffusion tensor imaging (DTI) data. We find that the Bayesian methods tend to outperform the standard functional graphical lasso, and that the functional graphical horseshoe, a procedure for which there is no direct frequentist analog, performs best overall. We then consider a deep neural network architecture to estimate functional graphical models by combining two simple off-the-shelf algorithms: adaptive functional principal components analysis (FPCA) (Yao et al., 2021a) and the convolutional graph estimator (Belilovsky et al., 2016). We train the proposed model with synthetic data that emulate real-world observations and prior knowledge. Through the synthetic data generation process, our model converts an inference problem into a supervised learning problem. Compared with other frameworks, our proposed deep model offers a general, data-driven recipe for inferring the functional graphical model: it takes the raw functional dataset as input and avoids deriving sophisticated closed-form estimators. Through simulation studies, we find that our deep functional graph model trained on synthetic data generalizes well and marginally outperforms other popular baselines. In addition, we apply the deep functional graphical model to real-world EEG data, and our proposed model discovers meaningful brain connectivity. Finally, we are interested in estimating causal graphs with functional input. In order to process functional covariates in causal estimation, we leverage a similar strategy to our deep functional graphical model. We extend popular deep causal models to infer causal effects with functional confounders within the potential outcomes framework. Our method is simple yet effective, and we validate the proposed architecture in a variety of simulation settings. Our work offers an alternative way to do causal inference with functional data.
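
    As a rough illustration of the frequentist baseline mentioned above (not the Bayesian or deep models proposed in the dissertation), the sketch below approximates a functional graphical lasso by running an ordinary graphical lasso on stacked FPCA scores and summarising block norms; the toy data, dimensions and penalty are assumptions.

```python
# Hedged stand-in for the functional graphical lasso pipeline. The real functional
# graphical lasso penalises whole score blocks jointly; here an ordinary graphical
# lasso on stacked FPCA scores is used, and edges are read off block-wise.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_subjects, n_nodes, n_times, n_comp = 60, 5, 100, 3          # toy sizes (assumptions)
curves = rng.standard_normal((n_subjects, n_nodes, n_times))  # e.g. per-channel EEG curves
curves[:, 1] = 0.7 * curves[:, 0] + 0.3 * rng.standard_normal((n_subjects, n_times))  # link nodes 0-1

# Step 1: FPCA per node, approximated here by ordinary PCA on the discretised curves.
scores = np.hstack([PCA(n_components=n_comp).fit_transform(curves[:, j])
                    for j in range(n_nodes)])                 # (n_subjects, n_nodes * n_comp)
scores = (scores - scores.mean(0)) / scores.std(0)

# Step 2: sparse precision matrix over all scores.
prec = GraphicalLasso(alpha=0.2).fit(scores).precision_

# Step 3: summarise each (node j, node k) block by its Frobenius norm -> edge strengths.
edges = np.array([[np.linalg.norm(prec[j*n_comp:(j+1)*n_comp, k*n_comp:(k+1)*n_comp])
                   for k in range(n_nodes)] for j in range(n_nodes)])
print(np.round(edges, 2))   # the (0, 1) block should stand out among the off-diagonal blocks
```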

    Multimodal Three Dimensional Scene Reconstruction, The Gaussian Fields Framework

    The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge that we addressed is the automatic point-set registration task. In this context our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets; however, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, as compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian force field exerted by one point-set on the other. This formulation proved useful for controlling and increasing the region of convergence, and hence allows for more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we introduced a new local feature descriptor, derived from visual saliency principles, which significantly enhanced the performance of the registration algorithm. The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method reaches many more pattern analysis applications.
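
    A minimal sketch of a Gaussian force-field criterion for 2D rigid alignment, maximised with a standard gradient-based optimiser and a simple coarse-to-fine schedule on the range parameter; the point-sets, the sigma values and the brute-force double sum (in place of the Fast Gauss Transform) are illustrative assumptions.

```python
# Hedged sketch: Gaussian-fields registration criterion for 2D rigid alignment,
# optimised with scipy's BFGS; a wide field is used first to enlarge the region of
# convergence, then a narrow one to refine the estimate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
model = rng.random((80, 2))                       # reference point-set
theta_true, t_true = 0.3, np.array([0.2, -0.1])   # ground-truth motion to recover
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
scene = model @ R_true.T + t_true                 # transformed copy to register against

def neg_criterion(params, sigma):
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = model @ R.T + np.array([tx, ty])
    d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
    return -np.exp(-d2 / sigma**2).sum()          # smooth, differentiable energy

res = minimize(neg_criterion, x0=np.zeros(3), args=(0.5,), method="BFGS")  # wide field
res = minimize(neg_criterion, x0=res.x, args=(0.1,), method="BFGS")        # refine
print("recovered rotation / translation:", np.round(res.x, 3))
```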

    Hierarchical Bayesian models for genome-wide association studies

    I consider a well-known problem in the field of statistical genetics called a genome-wide association study (GWAS), where the goal is to identify a set of genetic markers that are associated with a disease. A typical GWAS data set contains, for thousands of unrelated individuals, a set of hundreds of thousands of markers, a set of other covariates such as age, gender, smoking status and other risk factors, and a response variable that indicates the presence or absence of a particular disease. Due to biological phenomena such as the recombination of DNA and linkage disequilibrium, parents are more likely to pass on parts of DNA that lie close to each other on a chromosome together to their offspring; this non-random association between adjacent markers leads to strong correlation between markers in GWAS data sets. As a statistician, I reduce the complex problem of GWAS to its essentials, i.e. variable selection on a large-p-small-n data set that exhibits multicollinearity, and develop solutions that complement and advance the current state-of-the-art methods. Before outlining and explaining my contributions to the field in detail, I present a literature review that summarizes the history of GWAS and the relevant tools and techniques that researchers have developed over the years for this problem.
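
    To make the large-p-small-n variable-selection framing concrete, here is a hedged sketch using lasso-penalised logistic regression on simulated, correlated genotype markers; this is a standard baseline shown only for illustration, not the hierarchical Bayesian models developed in the thesis, and all sizes, effect sizes and the penalty strength are assumptions.

```python
# Hedged illustration of variable selection on a large-p-small-n, multicollinear design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 1000                                    # few individuals, many markers
base = rng.binomial(2, 0.3, size=(n, p)).astype(float)
X = 0.7 * base + 0.3 * np.roll(base, 1, axis=1)     # adjacent markers correlated (toy LD)
beta = np.zeros(p)
beta[[10, 500, 900]] = 1.5                          # three truly associated markers
logits = X @ beta - (X @ beta).mean()
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))  # case/control status

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])           # markers kept by the l1 penalty;
print("markers selected:", selected)                # LD can pull in neighbours of true markers
```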

    Estimation of the Image Quality in Emission Tomography: Application to Optimization of SPECT System Design

    In emission tomography, the design of the imaging system has a great influence on the quality of the output image. Optimisation of the system design is a difficult problem due to the computational complexity and to the challenges in its mathematical formulation. In order to compare different system designs, an efficient and effective method to calculate image quality is needed. In this thesis, statistical and deterministic methods for the calculation of the uncertainty in the reconstruction are presented. In the deterministic case, the Fisher information matrix (FIM) formalism can be employed to characterise such uncertainty. Unfortunately, computing, storing and inverting the FIM is not feasible for 3D imaging systems. In order to tackle the computational load of calculating the inverse of the FIM, a novel approximation that relies on sub-sampling the FIM is proposed. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. This formulation reduces the computational complexity of inverting the FIM while still accounting for the global interdependence between the variables, for the acquisition geometry and for the object dependency. Using this approach, the noise properties as a function of the system geometry parameterisation were investigated in three case studies. In the first study, the design of a parallel-hole collimator for SPECT is optimised; the new method can be applied to problems such as trading off collimator resolution against sensitivity. In the second study, the reconstructed image quality was evaluated in the case of truncated projection data, showing that the sub-sampling approach is very accurate for evaluating the effects of missing data. Finally, the noise properties of a D-SPECT system were studied for varying acquisition protocols, showing that the new method is well suited to problems such as optimising adaptive data sampling schemes.
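
    A toy numpy sketch of the sub-sampling idea: the Poisson Fisher information is evaluated only at voxels on a coarse grid and that small matrix is inverted to approximate local variance; the random system matrix, grid spacing and ridge term are assumptions, not the thesis's SPECT models.

```python
# Hedged sketch of the sub-sampled Fisher-information idea for a toy 2D emission problem.
import numpy as np

rng = np.random.default_rng(0)
nx = 16                                  # toy image is nx*nx voxels
n_voxels, n_bins = nx * nx, 600
A = rng.random((n_bins, n_voxels))       # stand-in for the system matrix
x = np.ones(n_voxels)                    # FIM is object dependent: it depends on the activity
ybar = A @ x                             # expected projection counts

# The full Poisson FIM, F = A.T @ diag(1/ybar) @ A, is n_voxels x n_voxels and becomes
# infeasible to store and invert in 3D. Sub-sampling keeps only voxels on a coarse grid,
# while the full system matrix still couples distant voxels through the projections.
rows, cols = np.meshgrid(np.arange(0, nx, 4), np.arange(0, nx, 4), indexing="ij")
grid = (rows * nx + cols).ravel()                            # every 4th voxel in each direction
F_sub = A[:, grid].T @ (A[:, grid] / ybar[:, None])          # (len(grid), len(grid))
cov_sub = np.linalg.inv(F_sub + 1e-6 * np.eye(len(grid)))    # small ridge for stability
print("approx. variance at grid voxels:", np.round(np.diag(cov_sub)[:5], 4))
```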

    Side information in robust principal component analysis: algorithms and applications

    Dimensionality reduction and noise removal are fundamental machine learning tasks that are vital to artificial intelligence applications. Principal component analysis has long been utilised in computer vision to achieve the above-mentioned goals. Recently, it has been enhanced in terms of robustness to outliers in robust principal component analysis. Both convex and non-convex programs have been developed to solve this new formulation, some with exact convergence guarantees. Its effectiveness can be witnessed in image and video applications ranging from image denoising and alignment to background separation and face recognition. However, robust principal component analysis is by no means perfect. This dissertation identifies its limitations, explores various promising options for improvement and validates the proposed algorithms on both synthetic and real-world datasets. Common algorithms approximate the NP-hard formulation of robust principal component analysis with convex envelopes. Though exact recovery can be guaranteed under certain assumptions, the relaxation margin is too big to be squandered. In this work, we propose to apply gradient descent on the Burer-Monteiro bilinear matrix factorisation to squeeze this margin given available subspaces. This non-convex approach improves upon conventional convex approaches both in terms of accuracy and speed. On the other hand, there is often accompanying side information when an observation is made. The ability to assimilate such auxiliary sources of data can ameliorate the recovery process. In this work, we investigate in depth such possibilities for incorporating side information in restoring the true underlying low-rank component from gross sparse noise. Lastly, tensors, also known as multi-dimensional arrays, represent real-world data more naturally than matrices. It is thus advantageous to adapt robust principal component analysis to tensors. Since there is no exact equivalence between tensor rank and matrix rank, we employ the notions of Tucker rank and CP rank as our optimisation objectives. Overall, this dissertation carefully defines the problems when facing real-world computer vision challenges, extensively and impartially evaluates the state-of-the-art approaches, proposes novel solutions and provides sufficient validations on both simulated data and popular real-world datasets for various mainstream computer vision tasks.
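
    A minimal sketch of the non-convex route described above: gradient descent on a Burer-Monteiro factorisation L = U V^T, here fitted against a clipped (Huber-like) residual so that gross outliers have bounded influence; the rank, step size and threshold are illustrative assumptions, not the dissertation's tuned algorithm.

```python
# Hedged sketch of robust PCA via a Burer-Monteiro factorisation with spectral
# initialisation and plain gradient steps on the two factors.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 100, 80, 5
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))       # low-rank part
S_true = (rng.random((m, n)) < 0.05) * 10 * rng.standard_normal((m, n))  # gross sparse noise
M = L_true + S_true

# Spectral initialisation from a rank-r SVD of the observations.
U0, s0, V0t = np.linalg.svd(M, full_matrices=False)
U = U0[:, :r] * np.sqrt(s0[:r])
V = V0t[:r].T * np.sqrt(s0[:r])

step, lam = 0.5 / s0[0], 3.0
for _ in range(400):
    R = np.clip(U @ V.T - M, -lam, lam)            # residual with outliers saturated
    U, V = U - step * R @ V, V - step * R.T @ U    # simultaneous gradient steps on the factors

print("relative low-rank error:",
      np.linalg.norm(U @ V.T - L_true) / np.linalg.norm(L_true))
```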

    Principles of Neural Network Architecture Design - Invertibility and Domain Knowledge

    Neural network architectures allow a tremendous variety of design choices. In this work, we study two principles underlying these architectures: first, the design and application of invertible neural networks (INNs); second, the incorporation of domain knowledge into neural network architectures. After introducing the mathematical foundations of deep learning, we address the invertibility of standard feedforward neural networks from a mathematical perspective. These results serve as a motivation for our proposed invertible residual networks (i-ResNets). This architecture class is then studied in two scenarios: first, we propose ways to use i-ResNets as a normalizing flow and demonstrate the applicability to high-dimensional generative modeling; second, we study the excessive invariance of common deep image classifiers and discuss consequences for adversarial robustness. We finish with a study of convolutional neural networks for tumor classification based on imaging mass spectrometry (IMS) data. For this application, we propose an adapted architecture guided by our knowledge of the domain of IMS data and show its superior performance on two challenging tumor classification datasets.
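
    A small numpy sketch of the invertibility principle behind residual architectures of this kind: a residual block y = x + g(x) is invertible whenever g is a contraction (Lipschitz constant below 1, enforced here by crude spectral rescaling), and the inverse can be computed by fixed-point iteration; the toy weights and contraction factor are assumptions.

```python
# Hedged sketch: an invertible residual block and its fixed-point inverse.
import numpy as np

rng = np.random.default_rng(0)
d = 4
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def contract(W, c=0.9):
    """Rescale W so that its spectral norm is at most c (< 1)."""
    return W * (c / max(np.linalg.norm(W, 2), c))

W1, W2 = contract(W1), contract(W2)

def g(x):                                   # small residual branch with Lip(g) <= 0.81 < 1
    return W2 @ np.tanh(W1 @ x)

def forward(x):                             # the residual block y = x + g(x)
    return x + g(x)

def inverse(y, n_iter=50):                  # fixed-point iteration x <- y - g(x)
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.standard_normal(d)
print("reconstruction error:", np.linalg.norm(inverse(forward(x)) - x))
```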

    Advanced Probabilistic Models for Clustering and Projection

    Probabilistic modeling for data mining and machine learning problems is a fundamental research area. The general approach is to assume a generative model underlying the observed data, and to estimate model parameters via likelihood maximization. It has deep probability theory as its mathematical background, and enjoys a large number of methods from statistical learning, sampling theory and Bayesian statistics. In this thesis we study several advanced probabilistic models for data clustering and feature projection, two important unsupervised learning problems. The goal of clustering is to group similar data points together to uncover the data clusters. While numerous methods exist for various clustering tasks, one important question remains: how to automatically determine the number of clusters. The first part of the thesis answers this question from a mixture modeling perspective. A finite mixture model is first introduced for clustering, in which each mixture component is assumed to be an exponential family distribution for generality. The model is then extended to an infinite mixture model, and its strong connection to the Dirichlet process (DP), a non-parametric Bayesian framework, is uncovered. A variational Bayesian algorithm called VBDMA is derived from this new insight to learn the number of clusters automatically, and empirical studies on some 2D data sets and an image data set verify the effectiveness of this algorithm. In feature projection, we are interested in dimensionality reduction and aim to find a low-dimensional feature representation for the data. We first review the well-known principal component analysis (PCA) and its probabilistic interpretation (PPCA), and then generalize PPCA to a novel probabilistic model that is able to handle non-linear projection, known as kernel PCA. An expectation-maximization (EM) algorithm is derived for kernel PCA such that it is fast and applicable to large data sets. We then propose a novel supervised projection method called MORP, which can take the output information into account in a supervised learning context. Empirical studies on various data sets show much better results compared to unsupervised projection and other supervised projection methods. Finally, we generalize MORP probabilistically to propose SPPCA for supervised projection, and we naturally extend the model to S2PPCA, a semi-supervised projection method. This allows us to incorporate both the label information and the unlabeled data into the projection process. In the third part of the thesis, we introduce a unified probabilistic model that can handle data clustering and feature projection jointly. The model can be viewed as a clustering model with projected features, and as a projection model with structured documents. A variational Bayesian learning algorithm can be derived, and it turns out to iterate between clustering operations and projection operations until convergence. Superior performance can be obtained for both clustering and projection.
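
    As a hedged illustration of choosing the number of clusters with a DP mixture (using scikit-learn's variational implementation, not the VBDMA algorithm of the thesis), an over-provisioned Dirichlet-process Gaussian mixture prunes unused components on a toy 2D data set; the cluster layout and weight threshold are assumptions.

```python
# Hedged illustration: a variational Dirichlet-process Gaussian mixture with more
# components than needed concentrates its weights on the clusters supported by the data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))   # three true 2D clusters
               for c in ([0, 0], [3, 0], [0, 3])])

dpgmm = BayesianGaussianMixture(n_components=10,             # deliberately too many
                                weight_concentration_prior_type="dirichlet_process",
                                random_state=0).fit(X)
print("effective clusters:", np.sum(dpgmm.weights_ > 0.05))  # typically 3
```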