
    Geometry of discrete and continuous bounded surfaces

    We work on reconstructing discrete and continuous surfaces with boundaries using length constraints. First, for a bounded discrete surface, we discuss rigidity and the number of embeddings in three-dimensional space, modulo rigid transformations, for given real edge lengths. Our work mainly considers the maximal number of embeddings of rigid graphs in three-dimensional space for specific geometries (annulus, strip). We modify a commonly used semi-algebraic, geometric formulation based on Bézout's theorem, starting from the Euclidean distances corresponding to edge lengths. We suggest a simple way to construct a rigid graph whose number of embeddings has a finite upper bound. We also implement a generalization of counting embeddings for graphs by segmentation into multiple rigid graphs in d-dimensional space. Our computational methodology uses vector and matrix operations and works best with a relatively small number of points.
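As a concrete illustration of the linear-algebraic side of such computations, the textbook infinitesimal-rigidity test checks the rank of the rigidity matrix. This minimal numpy sketch shows that standard criterion only; it is not the thesis's semi-algebraic formulation, and all names are illustrative:

```python
import numpy as np

def rigidity_matrix(points, edges):
    """Build the rigidity matrix of a bar-joint framework: the row for
    edge (i, j) holds p_i - p_j in vertex i's columns and p_j - p_i in
    vertex j's columns."""
    n, d = points.shape
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        diff = points[i] - points[j]
        R[row, i * d:(i + 1) * d] = diff
        R[row, j * d:(j + 1) * d] = -diff
    return R

def is_infinitesimally_rigid(points, edges):
    """A framework on n points in R^d (in general position, n >= d) is
    infinitesimally rigid iff rank(R) = n*d - d*(d+1)/2, i.e. the only
    motions in the kernel are the trivial rigid motions."""
    n, d = points.shape
    R = rigidity_matrix(points, edges)
    return np.linalg.matrix_rank(R) == n * d - d * (d + 1) // 2

# Tetrahedron (K4) on non-coplanar points in 3D: rigid.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_infinitesimally_rigid(pts, K4))  # True
```

Removing any edge of the tetrahedron drops the rank below 3n - 6 and the test reports a flexible framework.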

    Adaptive Sampling for Geometric Approximation

    Geometric approximation of multi-dimensional data sets is an essential algorithmic component for applications in machine learning, computer graphics, and scientific computing. This dissertation promotes an algorithmic sampling methodology for a number of fundamental approximation problems in computational geometry. For each problem, the proposed sampling technique is carefully adapted to the geometry of the input data and the functions to be approximated. In particular, we study proximity queries in spaces of constant dimension and mesh generation in 3D. We start with polytope membership queries, where query points are tested for inclusion in a convex polytope. Trading off accuracy for efficiency, we tolerate one-sided errors for points within an epsilon-expansion of the polytope. We propose a sampling strategy for the placement of covering ellipsoids that is sensitive to the local shape of the polytope. The key insight is to realize the samples as Delone sets in the intrinsic Hilbert metric. Using this intrinsic formulation, we considerably simplify state-of-the-art techniques, yielding an intuitive and optimal data structure. Next, we study nearest-neighbor queries, which retrieve the most similar data point to a given query point. To accommodate more general measures of similarity, we consider non-Euclidean distances, including convex distance functions and Bregman divergences. Again, we tolerate multiplicative errors, retrieving any point no farther than (1+epsilon) times the distance to the nearest neighbor. We propose a sampling strategy sensitive to the local distribution of points and the gradient of the distance functions. Combined with a careful regularization of the distance minimizers, we obtain a generalized data structure that essentially matches state-of-the-art results specific to the Euclidean distance.
Finally, we investigate the generation of Voronoi meshes, where a given domain is decomposed into Voronoi cells as desired for a number of important solvers in computational fluid dynamics. The challenge is to arrange the cells near the boundary to yield an accurate surface approximation without sacrificing quality. We propose a sampling algorithm for the placement of seeds to induce a boundary-conforming Voronoi mesh of the correct topology, with careful treatment of sharp and non-manifold features. The proposed algorithm achieves significant quality improvements over state-of-the-art polyhedral meshing based on clipped Voronoi cells.
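The Bregman divergences mentioned in the abstract generalize the squared Euclidean distance. A minimal sketch of the standard definition (the function names are illustrative, not from the dissertation):

```python
import numpy as np

def bregman(F, gradF, x, y):
    """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>:
    the gap between F at x and its first-order expansion around y.
    Nonnegative whenever F is convex."""
    return F(x) - F(y) - np.dot(gradF(y), x - y)

# F(x) = ||x||^2 recovers the squared Euclidean distance.
sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2 * x
x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(bregman(sq, grad_sq, x, y))  # 5.0 == ||x - y||^2
```

For non-quadratic generators (e.g. negative entropy, which yields the KL divergence) the divergence is asymmetric, which is why approximate nearest-neighbor structures for Bregman divergences need more care than the Euclidean case.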

    Converting Neuroimaging Big Data to information: Statistical Frameworks for interpretation of Image Driven Biomarkers and Image Driven Disease Subtyping

    Large-scale clinical trials and population-based research studies collect huge amounts of neuroimaging data. Machine learning classifiers can potentially use these data to train models that diagnose brain-related diseases from individual brain scans. In this dissertation we address two distinct challenges that beset a wider adoption of these tools for diagnostic purposes. The first challenge is the lack of a statistical inference machinery for highlighting brain regions that contribute significantly to the classifier's decisions. We address this challenge by developing an analytic framework for interpreting support vector machine (SVM) models used for neuroimaging-based diagnosis of psychiatric disease. We first note that permutation testing using SVM model components provides a reliable inference mechanism for model interpretation. We then derive our analysis framework by showing that, under certain assumptions, the permutation-based null distributions associated with SVM model components can be approximated analytically from the data themselves. Inference based on these analytic null distributions is validated on real and simulated data. The p-values computed from our analysis can accurately identify anatomical features that differentiate the groups used for classifier training. Since the majority of clinical and research communities are trained in understanding statistical p-values rather than machine learning techniques like the SVM, we hope that this work will lead to a better understanding of SVM classifiers and motivate a wider adoption of SVM models for image-based diagnosis of psychiatric disease. A second deficiency of learning-based neuroimaging diagnostics is the implicit assumption that 'a single homogeneous pattern of brain changes drives population-wide phenotypic differences'.
In reality, it is more likely that multiple patterns of brain deficits drive the complexities observed in the clinical presentation of most diseases. Understanding this heterogeneity may allow us to build better classifiers for identifying such diseases from individual brain scans. However, analytic tools to explore this heterogeneity are missing. With this in view, we present a framework for exploring disease heterogeneity using population neuroimaging data. The approach first computes difference images by comparing matched cases and controls and then clusters these differences. The cluster centers define a set of deficit patterns that differentiate the two groups. By allowing for more than one pattern of difference between two populations, our framework makes a radical departure from traditional tools used for neuroimaging group analyses. We hope that this leads to a better understanding of the processes that lead to disease and, ultimately, to improved image-based disease classifiers.
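The permutation-testing idea underlying the first framework can be illustrated generically. This sketch uses an ordinary least-squares weight vector as a stand-in for the SVM and brute-force label permutations rather than the dissertation's analytic approximation; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_pvalues(X, y, n_perm=500):
    """Permutation p-values for the components of a linear model's
    weight vector. The null distribution of each weight is built by
    refitting the model on label-permuted data, and the two-sided
    p-value is the fraction of permuted weights at least as large in
    magnitude as the observed one."""
    w_obs = np.linalg.lstsq(X, y, rcond=None)[0]
    null = np.empty((n_perm, X.shape[1]))
    for b in range(n_perm):
        null[b] = np.linalg.lstsq(X, rng.permutation(y), rcond=None)[0]
    return (np.abs(null) >= np.abs(w_obs)).mean(axis=0)

# Feature 0 carries the group signal; feature 1 is pure noise.
n = 200
y = np.repeat([1.0, -1.0], n // 2)
X = np.column_stack([y + 0.5 * rng.standard_normal(n),
                     rng.standard_normal(n)])
p = perm_pvalues(X, y)
print(p[0] < 0.05)  # True: the signal-carrying weight is significant
```

The dissertation's contribution is to replace the expensive refitting loop with analytic approximations of these null distributions; the sketch only shows what is being approximated.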

    The bracket geometry of statistics

    In this thesis we build a geometric theory of Hamiltonian Monte Carlo, with an emphasis on symmetries and its bracket generalisations, construct the canonical geometry of smooth measures and Stein operators, and derive the complete recipe of measure-constraints preserving dynamics and diffusions on arbitrary manifolds. Specifically, we explain the central role played by mechanics with symmetries in obtaining efficient numerical integrators, and provide a general method to construct explicit integrators for HMC on geodesic orbit manifolds via symplectic reduction. Following ideas developed by Maxwell, Volterra, Poincaré, de Rham, Koszul, Dufour, Weinstein, and others, we then show that any smooth distribution generates considerable geometric content, including "musical" isomorphisms between multi-vector fields and twisted differential forms, and a boundary operator, the rotationnel, which in particular engenders the canonical Stein operator. We then introduce the "bracket formalism" and its induced mechanics, a generalisation of Poisson mechanics and gradient flows that provides a general mechanism to associate unnormalised probability densities to flows depending on the score pointwise. Most importantly, we characterise all measure-constraints preserving flows on arbitrary manifolds, showing the intimate relation between measure-preserving Nambu mechanics and closed twisted forms. Our results are canonical. As a special case we obtain the characterisation of measure-preserving bracket mechanical systems and measure-preserving diffusions, thus explaining and extending to manifolds the complete recipe of SGMCMC. We discuss the geometry of Stein operators and extend the density approach by showing that these are simply a reformulation of the exterior derivative on twisted forms satisfying Stokes' theorem.
Combining the canonical Stein operator with brackets allows us to naturally recover the Riemannian and diffusion Stein operators as special cases. Finally, we introduce the minimum Stein discrepancy estimators, which provide a unifying perspective on parameter inference based on score matching, contrastive divergence, and minimum probability flow.
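One standard special case of the diffusion Stein operators discussed above is the Langevin-Stein operator for a one-dimensional Gaussian, whose defining identity E_p[(A f)(X)] = 0 is easy to check numerically. This is an illustration of the general idea only, not the thesis's canonical construction:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.standard_normal(200_000)

# Langevin-Stein operator for p = N(0, 1):
#   (A f)(x) = f'(x) + f(x) * d/dx log p(x),  with score d/dx log p(x) = -x.
# The Stein identity says E_p[(A f)(X)] = 0 for sufficiently nice f,
# which is the property Stein discrepancies are built on.
def stein_apply(f, fprime, x):
    return fprime(x) + f(x) * (-x)

# With f(x) = x we get (A f)(x) = 1 - x^2, whose Gaussian mean is 0.
vals = stein_apply(lambda x: x, lambda x: np.ones_like(x), xs)
print(abs(vals.mean()) < 0.02)  # True (Monte Carlo estimate near 0)
```

Only the score of p enters the operator, never its normalising constant, which is why such operators support inference for unnormalised densities.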

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Automatic segmentation of cone-beam computed tomography images for prostate radiotherapy

    The use of CBCT imaging for image-guided radiation therapy (IGRT), and beyond that, image-guided adaptive radiation therapy (IGART), in the context of prostate cancer is challenging due to the poor contrast and high noise in pelvic CBCT images. The principal aim of the thesis is to provide methodological contributions for automatic intra-patient image registration between the planning CT scan and the treatment CBCT scan. The first part of our contributions concerns the development of a CBCT-based prostate setup correction strategy using CT-to-CBCT rigid registration (RR). We established a comparison between different RR algorithms: (a) global RR, (b) bony RR, and (c) bony RR refined by a local RR using the prostate CTV in the CT scan expanded with margins varying from 1 to 20 mm. A comprehensive statistical analysis of the quantitative and qualitative results was carried out on the whole dataset, composed of 115 daily CBCT scans and 10 planning CT scans from 10 prostate cancer patients. We also defined a novel practical method to automatically estimate the rectal distension occurring in the vicinity of the prostate between the CT and the CBCT scans. Using our measure of rectal distension, we evaluated its impact on the quality of local RR and provided a way to predict registration failure. On this basis, we derived recommendations for the use of automatic RR for prostate localization on CBCT scans in clinical practice. The second part of the thesis provides a methodological development of a new joint segmentation and deformable registration framework.
To deal with the poor contrast-to-noise ratio in CBCT images, which is likely to misguide registration, we conceived a new metric (or energy) comprising two terms: a global similarity term (the normalized cross correlation (NCC) was used, but any other similarity measure could be substituted) and a segmentation term based on a localized adaptation of the piecewise-constant region-based model of Chan-Vese, using an evolving contour in the CBCT image. Our principal aim was to improve the accuracy of the registration compared with an ordinary NCC metric. Our registration algorithm is fully automatic and takes as inputs (1) the planning CT image, (2) the daily CBCT image, and (3) the binary image associated with the CT image corresponding to the organ of interest we want to segment in the CBCT image in the course of the registration process.
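The two-term energy described above (global NCC similarity plus a Chan-Vese segmentation term) can be sketched minimally. This version uses a global, non-localized Chan-Vese fit and an illustrative weight `lam`, both simplifications of the thesis's formulation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two images: 1.0 for a
    perfect (affine-intensity) match, lower otherwise."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def chan_vese_data_term(img, mask):
    """Piecewise-constant Chan-Vese fit: squared error to the mean
    intensity inside and outside the current contour (boolean mask)."""
    c_in, c_out = img[mask].mean(), img[~mask].mean()
    return float(((img[mask] - c_in) ** 2).sum()
                 + ((img[~mask] - c_out) ** 2).sum())

def energy(fixed, moving, mask, lam=0.1):
    """Two-term registration energy: NCC similarity (negated so that
    lower energy is better) plus the segmentation term on the
    evolving contour in the moving image."""
    return -ncc(fixed, moving) + lam * chan_vese_data_term(moving, mask)

rng = np.random.default_rng(2)
img = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
print(round(ncc(img, img), 6))  # 1.0: an image matches itself exactly
```

Minimizing such an energy over deformations couples the registration to the segmentation: a contour that fits the organ well lowers the second term, steering the optimizer away from matches that the NCC alone would accept.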

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Climate Change Inaction in Canada: Political Subsystems and Policy Outcomes in the Oil & Gas Industry, 1999-2019

    Despite the increasing urgency of the climate crisis, Canada is unlikely to meet its 2030 greenhouse gas emissions reduction target under the Paris Agreement. The expansion of the country's fossil fuel industry is one of the main causes of Canada's emissions. Consequently, recent studies have adopted a policy network approach to outline the relationship between the federal government and the fossil fuel industry to explain the country's inaction. However, the relationship between this network and actual policy outcomes remains unclear. Hence, this study determines the extent to which climate and energy policy changes applied by the federal and Alberta provincial governments reflect the interests of the fossil fuel industry. The main findings point to the fossil fuel industry having had substantial political influence on climate and energy policy decisions over the last twenty years, although its influence has been increasingly contested over time. Nevertheless, this network remains influential in Canadian politics.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    2010 GREAT Day Program

    SUNY Geneseo's Fourth Annual GREAT Day. This file has a supplement of three additional pages, linked in this record.