
    Intersection Searching Amid Tetrahedra in 4-Space and Efficient Continuous Collision Detection


    On Geometric Range Searching, Approximate Counting and Depth Problems

    In this thesis we deal with problems connected to range searching, which is one of the central areas of computational geometry. The dominant problems in this area are halfspace range searching, simplex range searching and orthogonal range searching, and research into these problems has spanned decades. For many range searching problems, the best possible data structures cannot offer fast (i.e., polylogarithmic) query times if we limit ourselves to near-linear storage. Even worse, it is conjectured (and proved in some cases) that only very small improvements to these bounds are possible. This inefficiency has encouraged many researchers to seek alternatives through approximation. In this thesis we continue this line of research and focus on relative approximation of range counting problems. One important problem where it is possible to achieve a significant speedup through approximation is halfspace range counting in 3D. Here we build on previous work and obtain the first optimal data structure for approximate halfspace range counting in 3D. Our data structure has the slight advantage of being Las Vegas (the result is always correct), in contrast to the previous methods, which were Monte Carlo (the correctness holds with high probability). Another series of problems where approximation can provide substantial speedup comes from robust statistics. We consider three problems here: approximate Tukey depth, regression depth and simplicial depth queries. In 2D, we obtain an optimal data structure capable of approximating the regression depth of a query hyperplane. We also offer a linear-space data structure which can answer approximate Tukey depth queries efficiently in 3D. These data structures are obtained by applying our ideas for the approximate halfspace counting problem. Approximating the simplicial depth turns out to be much more difficult, however. 
Computing the simplicial depth of a given point is more computationally challenging than most other definitions of data depth. In 2D we obtain the first data structure which uses near-linear space and can answer approximate simplicial depth queries in polylogarithmic time. As applications of this result, we provide two non-trivial methods to approximate the simplicial depth of a given point in higher dimensions. Along the way, we establish a tight combinatorial relationship between the Tukey depth of any given point and its simplicial depth. Another problem investigated in this thesis is the dominance reporting problem, an important special case of orthogonal range reporting. In three dimensions, we solve this problem in the pointer machine model and the external memory model by offering the first optimal data structures in these models of computation. Also, in the RAM model and for points from an integer grid, we reduce the space complexity of the fastest known data structure to optimal. Using known techniques from the literature, we can use our results to obtain solutions for the orthogonal range searching problem as well. The query complexity offered by our orthogonal range reporting data structures matches the most efficient query complexities known in the literature, but our space bounds are lower than those of previous methods in the external memory model and in the RAM model where the input is a subset of an integer grid. The results also yield improved orthogonal range searching in higher dimensions (which shows the significance of the dominance reporting problem). Intersection searching is a generalization of range searching where we deal with more complicated geometric objects instead of points. We investigate the rectilinear disjoint polygon counting problem, a specialized intersection counting problem, and provide a linear-size data structure capable of counting the number of disjoint rectilinear polygons intersecting any rectilinear query polygon of constant size. The query time (as well as some other properties of our data structure) resembles that of the classical simplex range searching data structures.
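For intuition, both depth notions admit simple brute-force definitions in 2D. The following Python sketch illustrates the definitions only (not the thesis's data structures): it computes the simplicial depth of a query point by testing all triangles, and the Tukey depth by checking, for each data point, the two sides of the candidate line through it and the query point, assuming general position.

```python
from itertools import combinations

def orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a): > 0 iff a, b, c are
    # in counter-clockwise order.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def simplicial_depth(q, pts):
    # Number of triangles spanned by pts that contain q (general position).
    depth = 0
    for a, b, c in combinations(pts, 3):
        s = [orient(a, b, q), orient(b, c, q), orient(c, a, q)]
        if all(v > 0 for v in s) or all(v < 0 for v in s):
            depth += 1
    return depth

def tukey_depth(q, pts):
    # Minimum, over closed halfplanes containing q, of the number of points
    # inside; under general position it suffices to check, for each data
    # point p, the two open sides of the line through q and p.
    best = len(pts)
    for p in pts:
        nx, ny = p[1] - q[1], q[0] - p[0]       # a normal of line (q, p)
        dots = [(r[0] - q[0]) * nx + (r[1] - q[1]) * ny for r in pts]
        best = min(best, sum(d > 0 for d in dots), sum(d < 0 for d in dots))
    return best
```

The point of the thesis's data structures is to answer such queries approximately in polylogarithmic time rather than by this direct enumeration, which costs O(n^3) and O(n^2) time respectively.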

    Computational and Theoretical Issues of Multiparameter Persistent Homology for Data Analysis

    The basic goal of topological data analysis is to apply topology-based descriptors to understand and describe the shape of data. In this context, homology is one of the most relevant topological descriptors, well-appreciated for its discrete nature, computability and dimension independence. A further development is provided by persistent homology, which allows one to track homological features along a one-parameter increasing sequence of spaces. Multiparameter persistent homology, also called multipersistent homology, is an extension of the theory of persistent homology motivated by the need to analyze data naturally described by several parameters, such as vector-valued functions. Multipersistent homology presents several issues in terms of the feasibility of computations over real-sized data, and theoretical challenges in the evaluation of possible descriptors. The focus of this thesis is on the interplay between persistent homology theory and discrete Morse theory. Discrete Morse theory provides methods for reducing the computational cost of homology and persistent homology by considering the discrete Morse complex generated by the discrete Morse gradient in place of the original complex. The work of this thesis addresses the problem of computing multipersistent homology, to make this tool usable in real application domains. This requires both computational optimizations towards applications to real-world data, and theoretical insights for finding and interpreting suitable descriptors. Our computational contribution consists in proposing a new Morse-inspired and fully discrete preprocessing algorithm. We show the feasibility of our preprocessing over real datasets, and evaluate its impact as a preprocessing step for computing multipersistent homology. A theoretical contribution of this thesis consists in proposing a new notion of optimality for such a preprocessing in the multiparameter context. 
We show that the proposed notion generalizes an already known optimality notion from the one-parameter case. Under this definition, we show that the algorithm we propose as a preprocessing is optimal in low-dimensional domains. In the last part of the thesis, we consider preliminary applications of the proposed algorithm in the context of topology-based multivariate visualization, by tracking critical features generated by a discrete gradient field compatible with the multiple scalar fields under study. We discuss (dis)similarities between such critical features and the state-of-the-art techniques in topology-based multivariate data visualization.
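For context, in the one-parameter setting persistent homology is typically computed by column reduction of a boundary matrix over Z/2. The following minimal sketch (the textbook reduction, not the thesis's algorithm, and without any Morse-style preprocessing) shows the computation whose cost such preprocessing aims to reduce:

```python
def persistence_pairs(boundary):
    # boundary[j] = indices of the faces of simplex j (mod-2 boundary),
    # with simplices listed in filtration order.  Standard left-to-right
    # column reduction: each column is reduced until its lowest 1 is new.
    reduced, low_of, pairs = [], {}, []
    for j, col in enumerate(boundary):
        col = set(col)
        while col and max(col) in low_of:
            col ^= reduced[low_of[max(col)]]     # add owning column mod 2
        reduced.append(col)
        if col:
            low_of[max(col)] = j
            pairs.append((max(col), j))          # born at max(col), dies at j
    paired = {i for pair in pairs for i in pair}
    essential = [j for j in range(len(boundary)) if j not in paired]
    return pairs, essential
```

On a filtered triangle (three vertices, then three edges, then the 2-cell) this reports one essential connected component and a 1-cycle that is born when the last edge appears and dies when the triangle fills it in.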

    Algorithms and hardness results for geometric problems on stochastic datasets

    University of Minnesota Ph.D. dissertation, July 2019. Major: Computer Science. Advisor: Ravi Janardan. 1 computer file (PDF); viii, 121 pages. Traditionally, geometric problems are studied on datasets in which each data object exists with probability 1 at its location in the underlying space. However, in many scenarios there may be some uncertainty associated with the existence or the locations of the data points. Such uncertain datasets, called stochastic datasets, are often more realistic, as they are more expressive and can model real data more precisely. For this reason, geometric problems on stochastic datasets have received significant attention in recent years. This thesis studies three sets of geometric problems on stochastic datasets equipped with existential uncertainty. The first set of problems addresses the linear separability of a bichromatic stochastic dataset. Specifically, these problems are concerned with how to compute the probability that a realization of a bichromatic stochastic dataset is linearly separable, as well as how to compute the expected separation margin of such a realization. The second set of problems deals with the stochastic convex hull, i.e., the convex hull of a stochastic dataset. This includes computing expected measures of a stochastic convex hull, such as the expected diameter, width, and combinatorial complexity. The third set of problems considers the dominance relation in a colored stochastic dataset. These problems involve computing the probability that a realization of a colored stochastic dataset does not contain any dominance pair consisting of two different-colored points. New algorithmic and hardness results are provided for all three sets of problems.
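Under existential uncertainty each point appears independently with its own probability, so such quantities are expectations over all 2^n realizations. As a concrete exponential-time baseline for the third problem family (an illustration only, not the thesis's algorithm), a brute-force computation might look like this:

```python
from itertools import product

def prob_no_dominance_pair(points):
    # points: list of (x, y, color, prob); each point is present in a
    # realization independently with its probability.  Returns the
    # probability that no present point dominates a present point of a
    # different color (p dominates q iff p.x >= q.x and p.y >= q.y).
    # Enumerates all 2^n realizations -- feasible only for tiny inputs.
    def dominates(p, q):
        return p[0] >= q[0] and p[1] >= q[1]

    total = 0.0
    for mask in product([0, 1], repeat=len(points)):
        pr = 1.0
        for bit, (x, y, color, prob) in zip(mask, points):
            pr *= prob if bit else (1.0 - prob)
        present = [p for bit, p in zip(mask, points) if bit]
        if all(not (dominates(p, q) and p[2] != q[2])
               for p in present for q in present):
            total += pr
    return total
```

For example, with a red point at (0, 0) and a blue point at (1, 1), each present with probability 1/2, the only bad realization is the one containing both points, so the answer is 3/4. The thesis's contribution is algorithms (and hardness results) that avoid this exponential enumeration.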

    High-dimensional polytopes defined by oracles: algorithms, computations and applications

    The processing and analysis of high-dimensional geometric data plays a fundamental role in many disciplines of science and engineering. Over the last decades, many successful geometric algorithms have been developed in 2 and 3 dimensions; however, in most cases their performance in higher dimensions is poor. This behavior is commonly called the curse of dimensionality. Two solution frameworks adopted to overcome this difficulty are the exploitation of special structure in the data, such as sparsity or low intrinsic dimension, and the design of approximation algorithms. This thesis studies problems within these frameworks. Its main research area is discrete and computational geometry and its connections to branches of computer science and applied mathematics, such as polytope theory, algorithm implementation, randomized geometric algorithms, computational algebraic geometry, and optimization. The fundamental geometric objects of our study are polytopes, whose defining properties are their convexity and the fact that they are given by an oracle in a high-dimensional space. The contribution of this thesis is threefold. First, the design and analysis of geometric algorithms for problems concerning high-dimensional convex polytopes, such as convex hull and volume computation, and their applications to computational algebraic geometry and optimization. Second, the establishment of combinatorial characterization results for essential polytope families. Third, the implementation and experimental analysis of the proposed algorithms and methods. The developed software is open-source, publicly available, and builds on and extends state-of-the-art geometric and algebraic software libraries such as CGAL and polymake.
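To make the oracle model concrete: an oracle polytope is accessible only through queries such as membership, and quantities like volume must be estimated through those queries. The following Python sketch (an illustration of the model, not of the thesis's methods) estimates the volume of a convex body given by a membership oracle via plain rejection sampling in a bounding box:

```python
import random

def estimate_volume(membership, bbox, samples=100_000, seed=0):
    # membership(x) -> bool is the oracle; bbox = [(lo, hi), ...] gives a
    # bounding box, one interval per coordinate.  Estimates the volume as
    # (box volume) * (fraction of uniform samples the oracle accepts).
    # Plain rejection sampling degrades exponentially with the dimension,
    # which is exactly why high-dimensional volume computation relies on
    # random-walk methods instead.
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in bbox:
        box_vol *= hi - lo
    hits = sum(1 for _ in range(samples)
               if membership([rng.uniform(lo, hi) for lo, hi in bbox]))
    return box_vol * hits / samples
```

For instance, the cross-polytope |x| + |y| <= 1 inside the box [-1, 1]^2 has volume 2, and the estimate converges to it as the sample count grows.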

    Computational Geometric and Algebraic Topology

    Computational topology is a young, emerging field of mathematics that seeks out practical algorithmic methods for solving complex and fundamental problems in geometry and topology. It draws on a wide variety of techniques from across pure mathematics (including topology, differential geometry, combinatorics, algebra, and discrete geometry), as well as applied mathematics and theoretical computer science. In turn, solutions to these problems have a wide-ranging impact: already they have enabled significant progress in the core area of geometric topology, introduced new methods in applied mathematics, and yielded new insights into the role that topology has to play in fundamental problems surrounding computational complexity. At least three significant branches have emerged in computational topology: algorithmic 3-manifold and knot theory, persistent homology, and surface and graph embeddings. These branches have emerged largely independently. However, it is clear that they have much to offer each other. The goal of this workshop was to take the first significant step toward bringing these three areas together, to share ideas in depth, and to pool our expertise in approaching some of the major open problems in the field.

    On the Maximal Mediated Set Structure and the Applications of Nonnegative Circuit Polynomials

    Certifying the nonnegativity of a polynomial is a significant task for both mathematical and scientific applications. In general, showing the nonnegativity of an arbitrary polynomial is hard. However, for certain classes of polynomials one can find easier conditions that imply their nonnegativity. In this work we investigate both the theoretical and the applied aspects of a special class of polynomials called circuit polynomials. On the theoretical side, we study the relationship of this class of polynomials with another very well studied class, the sums of squares, using the notion of the maximal mediated set (MMS). We show that the MMS is a property of an equivalence class, rather than a property of a single circuit polynomial. With this in mind, we generate a large database of MMS using the software polymake, and present some statistical and computational observations. On the applied side, we address the problem of multistationarity in chemical reaction network theory by employing a symbolic nonnegativity certification technique via circuit polynomials. The existence of multiple stationary states for a given reaction network with a given starting point is important, as it is closely related to cellular communication in the context of biochemical reaction networks. The existence of multistationarity can be decided by studying the signs of a relevant polynomial whose coefficients are parameterized by the reaction rates. As a case study, we consider the (de)phosphorylation cycle, and use the theory of nonnegative circuit polynomials to find symbolic nonnegativity certificates for the aforementioned polynomial. We provide a method that describes a non-empty open region in the parameter space that enables multistationarity for the (de)phosphorylation cycle. 
Moreover, we provide an explicit description of such an open region for the 2- and 3-site cases.
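For reference, the nonnegativity criterion for circuit polynomials works as follows: writing the inner exponent β as a convex combination β = Σ_i λ_i α(i) of the outer exponents, the circuit polynomial f = Σ_i c_i x^{α(i)} + d x^β is nonnegative iff |d| ≤ Θ_f = Π_i (c_i/λ_i)^{λ_i} (or d ≥ -Θ_f when all coordinates of β are even). A minimal Python sketch of this check, assuming the barycentric coordinates λ are supplied and the input really satisfies the circuit condition (this is an illustration of the criterion, not code from the thesis):

```python
def circuit_nonnegative(coeffs, lambdas, beta_is_even, d):
    # coeffs: positive coefficients c_i of the outer (vertex) terms.
    # lambdas: barycentric coordinates of the inner exponent beta with
    #   respect to the outer exponents (all > 0, summing to 1); assumed
    #   precomputed, and the circuit condition is not verified here.
    # Decides nonnegativity of f = sum_i c_i x^{a(i)} + d x^beta via the
    # circuit number Theta_f.
    theta = 1.0
    for c, lam in zip(coeffs, lambdas):
        theta *= (c / lam) ** lam            # circuit number Theta_f
    if beta_is_even:                         # x^beta >= 0 everywhere
        return d >= -theta
    return abs(d) <= theta
```

For the Motzkin polynomial x⁴y² + x²y⁴ + 1 - 3x²y², the inner exponent (2, 2) has barycentric coordinates λ = (1/3, 1/3, 1/3) and Θ_f = 3, so the coefficient d = -3 sits exactly on the boundary of nonnegativity.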

    Computational geometry through the information lens

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 111-117). This thesis revisits classic problems in computational geometry from the modern algorithmic perspective of exploiting the bounded precision of the input. In one dimension, this viewpoint has taken over as the standard model of computation, and has led to a powerful suite of techniques that constitute a mature field of research. In two or more dimensions, we have seen great success in understanding orthogonal problems, which decompose naturally into one-dimensional problems. However, problems of a nonorthogonal nature, the core of computational geometry, have remained uncracked for many years despite extensive effort. For example, Willard asked in SODA'92 for an o(n lg n) algorithm for Voronoi diagrams. Despite growing interest in the problem, it was not successfully solved until this thesis. Formally, let w be the number of bits in a computer word, and consider n points with O(w)-bit rational coordinates. This thesis describes: a data structure for 2-d point location with O(n) space and O(...) query time; randomized algorithms with running time O(...) for 3-d convex hull, 2-d Voronoi diagram, 2-d line segment intersection, and a variety of related problems; and a data structure for 2-d dynamic convex hull, with O(...) query time and O(...) update time. More generally, this thesis develops a suite of techniques for exploiting bounded precision in geometric problems, hopefully laying the foundations for a rejuvenated research direction. By Mihai Pǎtraşcu. S.M.
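The one-dimensional phenomenon the abstract refers to can be seen in a classic example: with w-bit integer keys, radix sort runs in O(n · w/r) time for r-bit digits, beating the Ω(n log n) comparison lower bound by exploiting bounded precision. A generic Python sketch (not code from the thesis):

```python
def radix_sort(a, word_bits=32, radix_bits=8):
    # LSD radix sort of nonnegative integers below 2**word_bits: one
    # stable bucket pass per radix_bits-sized digit, from least to most
    # significant, so later passes preserve earlier orderings.
    mask = (1 << radix_bits) - 1
    for shift in range(0, word_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for x in a:
            buckets[(x >> shift) & mask].append(x)
        a = [x for bucket in buckets for x in bucket]
    return a
```

The thesis's theme is that analogous precision-exploiting ideas can be carried from this one-dimensional setting into genuinely nonorthogonal geometric problems such as Voronoi diagrams and convex hulls.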