43 research outputs found

    N-ary Mathematical Morphology

    Mathematical morphology on binary images can be fully described by set theory. However, set theory is not sufficient to formulate mathematical morphology for grey-scale images. This type of image requires the introduction of a partial order on grey levels, together with the definition of sup and inf operators. More generally, mathematical morphology is now described within the framework of lattice theory. For a few decades, attempts have been made to apply mathematical morphology to multivariate images, such as color images, mainly based on the notion of a vector order. However, none of these attempts has given fully satisfying results. Instead of aiming directly at the multivariate case, we propose an extension of mathematical morphology to an intermediate situation: images composed of a finite number of independent, unordered categories.
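
    The lattice formulation mentioned above (a partial order on grey levels with sup and inf operators) is what makes grey-scale dilation and erosion well defined. The snippet below is a minimal Python/NumPy illustration of that formulation only, not of the paper's n-ary operators; the flat structuring element and test image are chosen for the example.

    # Minimal sketch of grey-scale morphology in the lattice view: dilation takes
    # the sup (max) and erosion the inf (min) of grey levels under a flat
    # structuring element. Illustration only; not code from the paper.
    import numpy as np

    def dilate(img: np.ndarray, se: np.ndarray) -> np.ndarray:
        """Grey-scale dilation by a flat structuring element `se` (boolean mask)."""
        pr, pc = se.shape[0] // 2, se.shape[1] // 2
        padded = np.pad(img, ((pr, pr), (pc, pc)), mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + se.shape[0], j:j + se.shape[1]]
                out[i, j] = window[se].max()   # sup over the structuring element
        return out

    def erode(img: np.ndarray, se: np.ndarray) -> np.ndarray:
        """Grey-scale erosion: inf of grey levels under the structuring element."""
        pr, pc = se.shape[0] // 2, se.shape[1] // 2
        padded = np.pad(img, ((pr, pr), (pc, pc)), mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + se.shape[0], j:j + se.shape[1]]
                out[i, j] = window[se].min()   # inf over the structuring element
        return out

    # Usage: opening (erosion followed by dilation) with a 3x3 square element.
    img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
    se = np.ones((3, 3), dtype=bool)
    opened = dilate(erode(img, se), se)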

    Computing Histogram of Tensor Images using Orthogonal Series Density Estimation and Riemannian Metrics

    This paper deals with the computation of the histogram of tensor images, that is, images in which each pixel carries an n x n symmetric positive definite matrix, an element of SPD(n). An approach based on orthogonal series density estimation is introduced, which is particularly useful for measures based on Riemannian metrics. By considering SPD(n) as the space of covariance matrices of multivariate Gaussian distributions, we obtain the corresponding density estimation for the measures induced by both the Fisher metric and the Wasserstein metric. Experimental results on the application of this histogram estimation to DTI image segmentation, texture segmentation and texture recognition are included.
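
    For multivariate Gaussians with a fixed mean, the Fisher metric on SPD(n) coincides, up to a constant factor, with the affine-invariant Riemannian metric. The snippet below is a small illustration of the corresponding distance, not the paper's histogram estimator; the matrices are generated only for the example.

    # Sketch: affine-invariant Riemannian distance on SPD(n),
    #   d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F = sqrt(sum_i log^2(lambda_i)),
    # where the lambda_i are the generalized eigenvalues of the pencil (B, A).
    import numpy as np
    from scipy.linalg import eigh

    def spd_distance(A: np.ndarray, B: np.ndarray) -> float:
        """Affine-invariant distance between symmetric positive definite matrices."""
        lam = eigh(B, A, eigvals_only=True)   # generalized eigenvalues of (B, A)
        return float(np.sqrt(np.sum(np.log(lam) ** 2)))

    # Usage on two random SPD(3) matrices.
    rng = np.random.default_rng(0)
    M, N = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    A, B = M @ M.T + 3 * np.eye(3), N @ N.T + 3 * np.eye(3)
    print(spd_distance(A, B))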

    Wang-Landau Algorithm: an adapted random walk to boost convergence

    The Wang-Landau (WL) algorithm is a recently developed stochastic algorithm computing densities of states of a physical system. Since its inception, it has been used on a variety of (bio-)physical systems, and in selected cases its convergence has been proved. The convergence speed of the algorithm is tightly tied to the connectivity properties of the underlying random walk. We therefore propose an efficient random walk that uses geometrical information to circumvent the following inherent difficulties: avoiding overstepping strata, toning down concentration phenomena in high-dimensional spaces, and accommodating multimodal distributions. Experiments on various models stress the importance of these improvements in making WL effective in challenging cases. Altogether, these improvements make it possible to compute the density of states for regions of the phase space of small biomolecules.
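
    For readers unfamiliar with the scheme being adapted, the snippet below is a minimal sketch of vanilla Wang-Landau on a toy system (N independent bits, energy = number of bits set, so the exact density of states is a binomial coefficient). It illustrates the flat-histogram rule and the 1/g(E) reweighting, not the geometric random walk proposed in the paper; all parameters are illustrative.

    # Minimal vanilla Wang-Landau sketch on a toy system; illustration only.
    # Exact answer: g(E) = C(N, E) for E bits set among N.
    from math import comb
    import numpy as np

    N = 16
    rng = np.random.default_rng(1)
    state = rng.integers(0, 2, N)
    energy = int(state.sum())

    log_g = np.zeros(N + 1)        # running estimate of log g(E)
    hist = np.zeros(N + 1)         # visit histogram for the flatness test
    log_f = 1.0                    # modification factor, decreased over time

    while log_f > 1e-3:
        for _ in range(5_000):
            i = rng.integers(N)                       # propose a single bit flip
            new_energy = energy + (1 - 2 * state[i])
            # Accept with probability min(1, g(E_old) / g(E_new)).
            if np.log(rng.random()) < log_g[energy] - log_g[new_energy]:
                state[i] ^= 1
                energy = new_energy
            log_g[energy] += log_f                    # update the DoS estimate
            hist[energy] += 1
        if hist.min() > 0.8 * hist.mean():            # crude flat-histogram test
            hist[:] = 0
            log_f /= 2                                # shrink the learning rate

    exact = np.array([np.log(comb(N, k)) for k in range(N + 1)])
    print(np.round(log_g - log_g[0], 2))              # estimate, shifted so E=0 is 0
    print(np.round(exact, 2))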

    A generic software framework for Wang-Landau type algorithms

    The Wang-Landau (WL) algorithm is a stochastic algorithm designed to compute densities of states of a physical system. It has also recently been used to perform challenging numerical integration in high-dimensional spaces. Using WL requires specifying the system handled, the proposal used to explore the definition domain, and the measure against which one integrates. Additionally, several design options related to the learning rate must be provided. This work presents the first generic (C++) implementation providing all such ingredients. The versatility of the framework is illustrated on a variety of problems, including the computation of densities of states of physical systems and biomolecules, and the computation of high-dimensional integrals. Along the way, we show that integrating against a Boltzmann-like measure to estimate the DoS with respect to the Lebesgue measure can be beneficial. We anticipate that our implementation, available in the Structural Bioinformatics Library (http://sbl.inria.fr), will leverage experiments on complex systems and contribute to unraveling free energy calculations for (bio-)molecular systems.
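
    The ingredients named above (system, proposal, measure, learning-rate schedule) can be pictured as pluggable components of a generic driver. The sketch below is purely hypothetical Python pseudo-structure used to convey that decomposition; all names and signatures are invented and are not the Structural Bioinformatics Library API, which is C++.

    # Hypothetical sketch of a pluggable WL driver; illustrative names only,
    # NOT the Structural Bioinformatics Library (C++) API.
    from dataclasses import dataclass
    from typing import Callable
    import numpy as np

    @dataclass
    class WLIngredients:
        energy: Callable[[np.ndarray], float]                              # the system handled
        proposal: Callable[[np.ndarray, np.random.Generator], np.ndarray]  # exploration move
        log_measure: Callable[[np.ndarray], float]                         # measure integrated against
        bin_of: Callable[[float], int]                                     # stratification of energies
        n_bins: int
        learning_rate: Callable[[int], float]                              # schedule, e.g. t -> 1/t

    def wang_landau(ing: WLIngredients, x0: np.ndarray, n_steps: int, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        x, log_g = x0, np.zeros(ing.n_bins)
        b = ing.bin_of(ing.energy(x))
        for t in range(1, n_steps + 1):
            y = ing.proposal(x, rng)
            by = ing.bin_of(ing.energy(y))
            # The acceptance ratio combines the target measure and the 1/g reweighting.
            log_alpha = (ing.log_measure(y) - ing.log_measure(x)) + (log_g[b] - log_g[by])
            if np.log(rng.random()) < log_alpha:
                x, b = y, by
            log_g[b] += ing.learning_rate(t)
        return log_g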

    Efficient computation of the volume of a polytope in high-dimensions using Piecewise Deterministic Markov Processes

    Computing the volume of a polytope in high dimensions is computationally challenging but has wide applications. Current state-of-the-art algorithms to compute such volumes rely on efficient sampling of a Gaussian distribution restricted to the polytope, using e.g. Hamiltonian Monte Carlo. We present a new sampling strategy that uses a Piecewise Deterministic Markov Process. Like Hamiltonian Monte Carlo, this new method involves simulating trajectories of a non-reversible process and inherits similar good mixing properties. Importantly, however, the process can be simulated more easily thanks to its piecewise linear trajectories, which reduces the computational cost by a factor of the dimension of the space. Our experiments indicate that our method is numerically robust and is one order of magnitude faster (or better) than existing methods using Hamiltonian Monte Carlo. On a single-core processor, we report computational times of a few minutes up to dimension 500.
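
    Inside an H-polytope {x : Ax <= b}, piecewise linear trajectories reduce to two geometric primitives: finding the first facet hit along a ray, and reflecting the velocity on that facet. The snippet below sketches those primitives only; it is an illustration, not the paper's sampler.

    # Sketch of the two geometric primitives behind piecewise linear trajectories
    # in an H-polytope {x : A x <= b}; illustration only, not the paper's sampler.
    import numpy as np

    def first_hit(A: np.ndarray, b: np.ndarray, x: np.ndarray, v: np.ndarray):
        """Smallest t > 0 such that x + t v reaches a facet, and that facet's index."""
        av, ax = A @ v, A @ x
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(av > 0, (b - ax) / av, np.inf)   # only facets we move towards
        i = int(np.argmin(t))
        return float(t[i]), i

    def reflect(A: np.ndarray, i: int, v: np.ndarray) -> np.ndarray:
        """Reflect the velocity v on facet i, whose outer normal is A[i]."""
        n = A[i]
        return v - 2.0 * (n @ v) / (n @ n) * n

    # Usage: bounce a straight-line trajectory around the unit cube [0, 1]^3.
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.concatenate([np.ones(3), np.zeros(3)])
    x, v = np.full(3, 0.5), np.array([0.7, 0.2, -0.4])
    for _ in range(5):
        t, i = first_hit(A, b, x, v)
        x = x + t * v            # travel to the boundary
        v = reflect(A, i, v)     # bounce off the facet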

    Hamiltonian Monte Carlo with reflections, and application to computing the volume of polytopes

    This paper studies HMC with reflections on the boundary of a domain, providing an enhanced alternative to Hit-and-run (HAR) for sampling a target distribution in a bounded domain. We make three contributions. First, we provide a convergence bound, paving the way to more precise mixing-time analysis. Second, we present a robust implementation based on multi-precision arithmetic, a mandatory ingredient to guarantee exact predicates and robust constructions. Third, we use our HMC random walk to perform polytope volume calculations, using it as an alternative to HAR within the volume algorithm of Cousins and Vempala. The tests, conducted up to dimension 50, show that the HMC random walk outperforms HAR.
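
    For context, Cousins-Vempala-style volume algorithms estimate the volume through a telescoping product of Gaussian integrals restricted to the polytope P, each ratio being estimated by sampling (with HAR, or here with HMC with reflections). The identity below is a standard schematic reminder of that multi-phase scheme, written in LaTeX and not quoted from the paper:

    % f_i(x) = exp(-||x||^2 / (2 sigma_i^2)), with sigma_0 < ... < sigma_m chosen so
    % that the first integral is essentially a full Gaussian integral and f_m is
    % nearly constant on P.
    \[
      \mathrm{vol}(P) \;\approx\; \int_P f_0(x)\,\mathrm{d}x \;
      \prod_{i=0}^{m-1} \frac{\int_P f_{i+1}(x)\,\mathrm{d}x}{\int_P f_i(x)\,\mathrm{d}x},
      \qquad f_i(x) = e^{-\lVert x\rVert^2 / (2\sigma_i^2)}.
    \]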

    Improving polytope volume calculations based on Hamiltonian Monte Carlo with boundary reflections and sweetened arithmetics

    Computing the volume of a high-dimensional polytope is a fundamental problem in geometry, also connected to the calculation of densities of states in statistical physics; a central building block of such algorithms is the method used to sample a target probability distribution. This paper studies Hamiltonian Monte Carlo (HMC) with reflections on the boundary of a domain, providing an enhanced alternative to Hit-and-run (HAR) for sampling a target distribution restricted to the polytope. We make three contributions. First, we provide a convergence bound, paving the way to more precise mixing-time analysis. Second, we present a robust implementation based on multi-precision arithmetic, a mandatory ingredient to guarantee exact predicates and robust constructions. We however allow controlled failures to happen, introducing the Sweeten Exact Geometric Computing (SEGC) paradigm. Third, we use our HMC random walk to perform H-polytope volume calculations, using it as an alternative to HAR within the volume algorithm of Cousins and Vempala. Systematic tests conducted up to dimension n = 100 on the cube, the isotropic simplex and the standard simplex show that HMC significantly outperforms HAR both in terms of accuracy and running time. Additional tests show that calculations may be handled up to dimension n = 500. These tests also establish that multi-precision is mandatory to avoid exits from the polytope.
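
    On the last point, the snippet below is a toy illustration, unrelated to the paper's C++ implementation, of why exact arithmetic matters for the membership predicate: the same boundary point, computed once with rationals and once with doubles, is classified on different sides of a facet.

    # Toy illustration (not the paper's implementation): rounding alone can move a
    # boundary point to the wrong side of the facet x <= 3/10; over a long reflected
    # trajectory, this kind of error produces spurious exits from the polytope.
    from fractions import Fraction

    x_exact = Fraction(1, 10) + Fraction(2, 10)    # exactly 3/10, i.e. on the facet
    x_float = 0.1 + 0.2                            # 0.30000000000000004 in doubles

    print(x_exact <= Fraction(3, 10))   # True: the exact predicate keeps the point inside
    print(x_float <= 0.3)               # False: floating point pushes it outside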
