77 research outputs found

    Investigating complex networks with inverse models: analytical aspects of spatial leakage and connectivity estimation

    Network theory and inverse modeling are two standard tools of applied physics, whose combination is needed when studying the dynamical organization of spatially distributed systems from indirect measurements. However, the associated connectivity estimation may be affected by spatial leakage, an artifact of inverse modeling that limits the interpretability of network analysis. This paper investigates general analytical aspects pertaining to this issue. First, the existence of spatial leakage is derived from the topological structure of inverse operators. Then, the geometry of spatial leakage is modeled and used to define a geometric correction scheme, which limits spatial leakage effects in connectivity estimation. Finally, this new approach for network analysis is compared analytically to existing methods based on linear regressions, which are shown to yield biased coupling estimates.
    Comment: 19 pages, 4 figures, including 5 appendices; v2: minor edits, 1 appendix added; v3: expanded version; v4: minor edit
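    To make the leakage mechanism concrete, the following toy simulation (not taken from the paper; the sizes, leadfield, and regularization are made up here) reconstructs two independent sources through a Tikhonov-regularized minimum-norm inverse. The off-diagonal entries of the resolution matrix R = KL quantify the cross-talk between sources, and the reconstructed time courses become spuriously correlated even though the true sources are independent, which is the spatial leakage artifact the paper analyzes; the paper's geometric correction scheme itself is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, n_times = 8, 5000

        # Two nearby sources with similar (hence correlated) leadfield topographies.
        l1 = rng.standard_normal(n_sensors)
        l2 = 0.9 * l1 + 0.1 * rng.standard_normal(n_sensors)
        L = np.column_stack([l1, l2])                      # forward model (leadfield)

        S = rng.standard_normal((2, n_times))              # independent source activity
        M = L @ S + 0.1 * rng.standard_normal((n_sensors, n_times))

        # Tikhonov-regularized minimum-norm inverse operator K.
        lam = 0.5
        K = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))

        R = K @ L                                          # resolution matrix; off-diagonal = leakage
        S_hat = K @ M                                      # reconstructed source time courses

        print("resolution matrix:\n", np.round(R, 3))
        print("correlation of reconstructed (truly independent) sources:",
              np.round(np.corrcoef(S_hat)[0, 1], 3))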

    Approaching the Problem of Time with a Combined Semiclassical-Records-Histories Scheme

    I approach the Problem of Time and other foundations of Quantum Cosmology using a combined histories, timeless and semiclassical approach. This approach is along the lines pursued by Halliwell. It involves the timeless probabilities for dynamical trajectories entering regions of configuration space, which are computed within the semiclassical regime. Moreover, the objects that Halliwell uses in this approach commute with the Hamiltonian constraint, H. This approach has not hitherto been considered for models that also possess nontrivial linear constraints, Lin. This paper carries this out for some concrete relational particle models (RPMs). If there is also commutation with Lin - the Kuchar observables condition - the constructed objects are Dirac observables. Moreover, this paper shows that the problem of Kuchar observables is explicitly resolved for 1- and 2-d RPMs. Then, as a first route to Halliwell's approach for nontrivial linear constraints that is also a construction of Dirac observables, I consider theories for which Kuchar observables are formally known, giving the relational triangle as an example. As a second route, I apply an indirect method that generalizes both group-averaging and Barbour's best matching. For conceptual clarity, my study involves the simpler case of Halliwell's 2003 sharp-edged window function; I leave the otherwise-improved, softened case of Halliwell 2009 for a subsequent Paper II. Finally, I provide comments on Halliwell's approach and how well it fares as regards the various facets of the Problem of Time and as an implementation of QM propositions.
    Comment: An improved version of the text, and with various further references. 25 pages, 4 figures
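    For orientation, the Halliwell-type construction described above can be rendered schematically as follows; the notation ($R$, $\chi_R$, $W$) is chosen here for illustration and details such as smearing and operator ordering are suppressed:

        \[
        A(q_0,p_0) \;=\; \int_{-\infty}^{+\infty}\!\mathrm{d}t\;\chi_{R}\big(q_{\mathrm{cl}}(t;q_0,p_0)\big),
        \qquad \{A,\,H\} \;=\; 0,
        \]
        \[
        \mathrm{Prob}(\text{trajectory enters } R)\;\approx\;\int \mathrm{d}q_0\,\mathrm{d}p_0\;
        W(q_0,p_0)\,\theta\big(A(q_0,p_0)\big).
        \]

    Here $\chi_R$ is the characteristic ("sharp-edged window") function of the configuration-space region $R$, $q_{\mathrm{cl}}$ is the classical trajectory with initial data $(q_0,p_0)$, and $W$ is the Wigner function of the semiclassical state. Since the integral runs over the whole trajectory, $A$ is a constant of the motion at the (semi)classical level, which is the sense in which such objects commute with the Hamiltonian constraint; the softened case of Halliwell 2009 replaces $\chi_R$ by a smooth window.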

    Stability and dimensionality reduction in nonlinear filtering

    The focus of this thesis is the analysis of the stability and robustness of continuous-time, finite state-space nonlinear filters, in order to provide new and practically relevant quantitative error bounds for a general class of approximate filters. This analysis is carried out through the use of the Hilbert projective metric. We begin by providing a self-contained introduction to the Hilbert metric and its fundamental properties, with a particular focus on the space of probability measures. We then derive and study various dual formulations, and exploit these to obtain a contraction result for linear operators on convex cones with respect to a new distance, the hyperbolic tangent of the Hilbert metric. This general observation directs us naturally towards a range of new results on stability and robustness in nonlinear filtering. Specifically, we turn to the problem of estimating the state of a continuous-time Markov chain from noisy observations. As regards stability, our key contribution is a proof that the corresponding optimal filter, called the Wonham filter, is contracting pathwise in the aforementioned distance given by the hyperbolic tangent of the Hilbert metric. Moreover, we give explicit deterministic and pathwise rates of convergence. By utilising these results, we are able to take an alternative approach to the study of the robustness of the Wonham filter, thereby improving on known error estimates and deriving rigorous, computable error bounds of theoretical and practical relevance for the analysis and implementation of approximate filters. Finally, we consider the problem of reducing the dimensionality of the Wonham filter via geometric projections, with a view towards defining an optimal projection filter. Building on the intuition provided by our error bounds, we find a natural submanifold for the Wonham filter such that the error of the projection filter is minimized.
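    For reference, the two classical ingredients behind these results can be written down explicitly (these are the standard definitions and Birkhoff's contraction theorem; the thesis's new distance and sharpened constants may differ in normalization):

        \[
        h(\mu,\nu) \;=\; \log\,\sup_{A,B}\frac{\mu(A)\,\nu(B)}{\mu(B)\,\nu(A)}
        \]
        \[
        h(K\mu,K\nu)\;\le\;\tanh\!\Big(\tfrac{1}{4}\,\Delta(K)\Big)\,h(\mu,\nu),
        \qquad \Delta(K)\;=\;\sup_{\mu,\nu}\,h(K\mu,K\nu).
        \]

    The first display is the Hilbert projective metric between (equivalent) probability measures; the second is Birkhoff's theorem for a positive linear operator $K$ mapping a convex cone into itself, with $\Delta(K)$ the projective diameter of its image. The "hyperbolic tangent of the Hilbert metric" in the abstract refers to a distance of the form $\tanh\big(\tfrac14 h(\mu,\nu)\big)$ (up to the precise constant used in the thesis), with respect to which the filtering recursion is shown to contract pathwise.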

    Data harmonization in PET imaging

    Medical imaging physics has advanced considerably in recent years, providing clinicians and researchers with increasingly detailed images that lend themselves to the quantitative approach typical of the hard sciences, based on the measurement and analysis of clinically relevant quantities extracted from the images themselves. This approach falls within the field of quantitative imaging. The possibility of sharing data quickly, the development of machine learning and data mining techniques, and the increasing availability of computational power and digital data storage that characterize this age constitute a great opportunity for quantitative imaging studies. Interest in large multicentric databases that gather images from individual research centers is growing year after year. Big datasets offer very interesting research perspectives, primarily because they increase the statistical power of studies. At the same time, they raise a compatibility issue between the data themselves: images acquired with different scanners and protocols can differ greatly in quality, and measures extracted from images of different quality may not be compatible with one another. Harmonization techniques have been developed to circumvent this problem. Harmonization refers to all efforts to combine data from different sources and provide users with a comparable view of data from different studies. Harmonization can be done before acquiring data, by choosing appropriate acquisition protocols a priori through a preliminary joint effort between research centers, or it can be done a posteriori, i.e., images are grouped into a single dataset and any effects on the measures caused by technical acquisition factors are then removed. Although a-priori harmonization guarantees the best results, it is often not used for practical and/or technical reasons. In this thesis I focus on a-posteriori harmonization. It is important to note that in multicentric studies, in addition to the technical variability related to scanners and acquisition protocols, there may be demographic variability that makes the samples from single centers not statistically equivalent to each other. The wide inter-individual variability that characterizes human beings, even more pronounced when patients are enrolled from very different geographical areas, can certainly exacerbate this issue. In addition, biological processes are complex phenomena: quantitative imaging measures can be affected by numerous confounding demographic variables, even ones apparently unrelated to the measures themselves. A good harmonization method should preserve inter-individual variability while removing all the effects due to technical acquisition factors. Heterogeneity in acquisition, together with great inter-individual variability, makes harmonization very hard to achieve. The harmonization methods currently used in the literature preserve only the inter-subject variability described by a set of known confounding variables, while all unknown confounding variables are wrongly removed. This can lead to incorrect harmonization, especially if the unknown confounders play an important role. The issue is emphasized in practice, as it sometimes happens that demographic variables known to play a major role are not available.
    The final goal of my thesis is to propose a harmonization method, developed in the context of amyloid Positron Emission Tomography (PET), that removes the effects of variability induced by technical factors while keeping all inter-individual differences. Since knowing all the demographic confounders is almost impossible, both practically and theoretically, my proposal does not require knowledge of these variables. The main idea is to characterize image quality through a set of quality measures evaluated in regions of interest (ROIs) chosen to be as independent as possible from anatomical and clinical variability, so that they exclusively highlight the effect of technical factors on image texture. Ideally, this allows the between-subject variability to be decoupled from the technical variability: the latter can be removed directly while the former is automatically preserved. Specifically, I defined and validated three quality measures based on image texture properties. In addition, I used an existing quality metric, and I considered the reconstruction matrix dimension to take image resolution into account. My work was performed on a multicentric dataset consisting of 1001 amyloid PET images. Before dealing specifically with harmonization, I handled some important preliminary issues: I built a relational database to organize and manage the data, and I developed an automated image pre-processing algorithm to achieve registration and quantification. This work might also be used in other imaging contexts: in particular, I believe it could be applied to fluorodeoxyglucose (FDG) PET and tau PET. The consequences of the harmonization I developed have been explored at a preliminary level. My proposal should be considered a starting point, as I mainly dealt with the quality measures, while the harmonization itself was performed with a linear regression model. Although harmonization through linear models is often used, more sophisticated techniques exist in the literature, and it would be interesting to combine them with my work. Further investigation would be desirable in the future.
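    As a concrete illustration of the a-posteriori step described above, the following sketch removes the component of a quantitative measure explained by a linear regression on image-quality covariates while leaving the residual, subject-specific variability untouched. Variable names, covariate counts, and the simulated data are hypothetical; the thesis's actual quality measures and dataset are not reproduced here, and a real analysis would also handle known demographic covariates, which the sketch omits.

        import numpy as np

        def harmonize_linear(measure, quality_covariates):
            """Remove the part of a quantitative imaging measure explained by
            technical quality covariates, preserving residual (subject) variability.

            measure            : (n_subjects,) array of the quantity of interest
            quality_covariates : (n_subjects, n_quality_measures) array of quality metrics
            """
            X = np.column_stack([np.ones(len(measure)), quality_covariates])
            beta, *_ = np.linalg.lstsq(X, measure, rcond=None)    # fit measure ~ quality
            technical_effect = quality_covariates @ beta[1:]      # part explained by quality
            # subtract the technical effect and re-centre on its mean
            return measure - technical_effect + technical_effect.mean()

        # Hypothetical usage: one measure per image and three texture-based quality metrics.
        rng = np.random.default_rng(0)
        quality = rng.standard_normal((1001, 3))
        suvr = 1.2 + 0.3 * rng.standard_normal(1001) + quality @ np.array([0.05, -0.02, 0.1])
        suvr_harmonized = harmonize_linear(suvr, quality)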

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
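    As a small illustration of the tensor train format at the heart of this work, the sketch below (a generic TT-SVD with a fixed maximal rank, not the monograph's own code or notation) decomposes a dense tensor into a chain of third-order cores by sequential truncated SVDs and contracts the cores back for comparison:

        import numpy as np

        def tt_svd(tensor, max_rank):
            """Decompose a dense tensor into tensor-train (TT) cores via sequential SVDs."""
            dims = tensor.shape
            cores, r_prev, rest = [], 1, np.asarray(tensor, dtype=float)
            for k in range(len(dims) - 1):
                mat = rest.reshape(r_prev * dims[k], -1)       # unfold the current mode
                U, S, Vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(S))                      # truncate to the prescribed TT rank
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))
                rest = np.diag(S[:r]) @ Vt[:r]                 # carry the remainder forward
                r_prev = r
            cores.append(rest.reshape(r_prev, dims[-1], 1))
            return cores

        def tt_to_full(cores):
            """Contract the TT cores back into a dense tensor."""
            out = cores[0]
            for core in cores[1:]:
                out = np.tensordot(out, core, axes=(-1, 0))
            return out.reshape([c.shape[1] for c in cores])

        # Hypothetical usage: a rank-1 4th-order tensor is recovered almost exactly.
        rng = np.random.default_rng(0)
        T = np.einsum('i,j,k,l->ijkl', *[rng.standard_normal(n) for n in (4, 5, 6, 7)])
        cores = tt_svd(T, max_rank=3)
        print("max reconstruction error:", np.abs(tt_to_full(cores) - T).max())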

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by means of sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling.

    Structures of Multivariate Dependence

    The investigation of dependence structures plays a major role in contemporary statistics. During the last decades, numerous dependence measures for both univariate and multivariate random variables have been established. In this thesis, we study the distance correlation coefficient, a novel measure of dependence for random vectors of arbitrary dimension, introduced by Szekely, Rizzo and Bakirov and by Szekely and Rizzo. In particular, we define an affinely invariant version of distance correlation and calculate this coefficient for numerous distributions: for the bivariate and the multivariate normal distribution, for the multivariate Laplace distribution, and for certain bivariate gamma and Poisson distributions. Moreover, we present a useful series representation of distance covariance for the class of Lancaster distributions and derive a generalization of an integral that plays a fundamental role in the theory of distance correlation. We further investigate a variable clustering problem that arises in low-rank Gaussian graphical models. In the case of fixed sample size, we discover that this problem is mathematically equivalent to the problem of subspace clustering for data from independent subspaces. In the asymptotic setting, we derive an estimator that consistently recovers the cluster structure in the case of noisy data.
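    For orientation, the sample distance correlation studied in the thesis can be computed with a direct O(n^2) implementation of the Szekely-Rizzo-Bakirov (V-statistic) estimator, sketched below; the affinely invariant version additionally standardizes each sample by its covariance, as in the last function. Function names and the example data are chosen here for illustration, not taken from the thesis.

        import numpy as np

        def _centered_distances(X):
            """Pairwise Euclidean distances of the rows of X, double-centered."""
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

        def distance_correlation(X, Y):
            """Sample distance correlation between random vectors X and Y (rows = observations)."""
            A, B = _centered_distances(X), _centered_distances(Y)
            dcov2 = (A * B).mean()                       # squared distance covariance
            dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
            denom = np.sqrt(dvar_x * dvar_y)
            return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

        def affine_distance_correlation(X, Y):
            """Affinely invariant version: whiten each sample by its covariance first."""
            def whiten(Z):
                Z = Z - Z.mean(axis=0)
                vals, vecs = np.linalg.eigh(np.atleast_2d(np.cov(Z, rowvar=False)))
                return Z @ vecs @ np.diag(vals ** -0.5) @ vecs.T
            return distance_correlation(whiten(X), whiten(Y))

        # Hypothetical usage with dependent but uncorrelated data:
        rng = np.random.default_rng(0)
        x = rng.standard_normal((500, 1))
        y = x ** 2 + 0.1 * rng.standard_normal((500, 1))
        print(distance_correlation(x, y), affine_distance_correlation(x, y))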