10 research outputs found

    Toward better computation models for modern machines

    Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the computational cost of address translation in virtual memory and the difficulties in designing parallel algorithms for modern many-core machines. The starting point for our work on virtual memory is the observation that analyzing some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict the growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations, and analyze the algorithms mentioned above, among others, in this model. The predictions agree with the measurements. We also analyze the VAT cost of cache-oblivious algorithms. In the second part of the paper we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on the ultimate planar convex hull algorithm of Kirkpatrick and Seidel; Gao et al., in their 2012 paper on the 3D convex hull, referred to it as the first successful implementation of the QuickHull algorithm on the GPU. Our motivation for working on modern many-core machines is the general belief in the engineering community that theory does not produce applicable results, and that theoretical researchers are unaware of the difficulties that arise when adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.
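    As a concrete illustration of the paper's motivating observation (our own toy benchmark, not the paper's measurement setup): a random scan touches a new page on almost every access, so address-translation costs make it noticeably slower than a sequential scan, even though the RAM model charges both O(n). A minimal Python sketch:

        import random
        import time

        n = 2_000_000                      # large enough to exceed cache/TLB reach
        data = list(range(n))
        order = list(range(n))
        random.shuffle(order)

        t0 = time.perf_counter()
        s = 0
        for i in range(n):                 # sequential scan
            s += data[i]
        t1 = time.perf_counter()
        for i in order:                    # random scan, same number of accesses
            s += data[i]
        t2 = time.perf_counter()
        # Interpreter overhead mutes the gap in Python; compiled code shows it
        # far more starkly, which is the effect the VAT model accounts for.
        print(f"sequential: {t1 - t0:.3f}s   random: {t2 - t1:.3f}s")

    For the second part, the GPU algorithm follows the QuickHull recursion of repeatedly splitting at the point farthest from the current chord. A minimal sequential sketch of that recursion (an illustration of the structure being parallelized, not the authors' GPU code):

        def cross(o, a, b):
            """Twice the signed area of triangle (o, a, b); > 0 iff b lies left of o->a."""
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def _hull_side(p, q, pts):
            """Hull vertices strictly left of the directed line p->q, found recursively."""
            left = [r for r in pts if cross(p, q, r) > 0]
            if not left:
                return []
            far = max(left, key=lambda r: cross(p, q, r))   # farthest from the chord
            return _hull_side(p, far, left) + [far] + _hull_side(far, q, left)

        def quickhull(points):
            pts = sorted(set(map(tuple, points)))
            if len(pts) < 3:
                return pts
            p, q = pts[0], pts[-1]                 # leftmost and rightmost anchors
            return [p] + _hull_side(p, q, pts) + [q] + _hull_side(q, p, pts)

        print(quickhull([(0, 0), (2, 0), (1, 1), (1, 3), (2, 2), (0, 2)]))
        # -> [(0, 0), (0, 2), (1, 3), (2, 2), (2, 0)] (clockwise)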

    Computing pseudotriangulations via branched coverings

    We describe an efficient algorithm to compute a pseudotriangulation of a finite planar family of pairwise disjoint convex bodies presented by its chirotope. The design of the algorithm relies on a deepening of the theory of visibility complexes and on the extension of that theory to the setting of branched coverings. The problem of computing a pseudotriangulation that contains a given set of bitangent line segments is also examined. (66 pages, 39 figures)
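    A word on the input model: a chirotope records, for each ordered triple, an orientation sign, and is the algorithm's only access to the geometry. For point sets this is exactly the classical orientation predicate; a small Python illustration of the idea (ours, not the paper's machinery for convex bodies, where the signs come from bitangents):

        def orientation(p, q, r):
            """+1 if p, q, r make a left turn, -1 for a right turn, 0 if collinear."""
            det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            return (det > 0) - (det < 0)

        pts = [(0, 0), (4, 0), (2, 3), (1, 1)]
        chirotope = {(i, j, k): orientation(pts[i], pts[j], pts[k])
                     for i in range(len(pts)) for j in range(len(pts))
                     for k in range(len(pts)) if len({i, j, k}) == 3}
        print(chirotope[(0, 1, 2)], chirotope[(0, 2, 1)])   # 1 -1 (antisymmetry)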

    Three Approaches to Building Time-Windowed Geometric Data Structures

    Given a set of geometric objects (points or line segments), each associated with a time value, we wish to determine whether a given property holds for the subset of objects whose time values fall within a query time window. We call such problems time-windowed decision problems. We present algorithms to preprocess for the time-windowed closest pair decision problem in O(n) expected time, for the time-windowed 2D diameter decision problem in O(n log n) time, for the time-windowed 2D convex hull area decision problem in O(n α(n) log n) time (where α is the inverse Ackermann function), and for the time-windowed 3D diameter decision and orthogonal segment intersection detection problems in O(n polylog n) time. Our first approach reduces the closest pair decision problem to 2D dominance range emptiness, using grids to compute candidate satisfying pairs. We extend this approach to find the closest pair of points by reducing the problem to 2D dominance range minimum, which we further reduce to 2D point location. Our second approach reduces time-windowed decision problems to a generalized range successor problem, which we solve using a novel way of searching range trees. Our third approach uses dynamic data structures directly, taking advantage of a new observation that the total number of combinatorial changes to a planar convex hull is near linear for any FIFO update sequence, in which deletions occur in the same order as insertions.
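    To make the first reduction concrete: a candidate pair with earlier time t_min and later time t_max lies inside a query window [a, b] exactly when t_min >= a and t_max <= b, so each candidate becomes a 2D point (t_min, t_max) and the window query becomes dominance range emptiness. A self-contained sketch of such an oracle over a precomputed candidate list (names hypothetical; the paper's grid-based candidate generation and point-location refinements are not reproduced):

        import bisect

        class WindowedPairOracle:
            """Answers: does some candidate pair lie entirely inside [a, b]?"""

            def __init__(self, candidates):
                # candidates: list of (t_min, t_max), e.g., pairs at distance <= r.
                self.cands = sorted(candidates)            # sort by t_min
                self.tmins = [c[0] for c in self.cands]
                # suffix_min[i] = smallest t_max among candidates i..end
                self.suffix_min, running = [0.0] * len(self.cands), float("inf")
                for i in range(len(self.cands) - 1, -1, -1):
                    running = min(running, self.cands[i][1])
                    self.suffix_min[i] = running

            def query(self, a, b):
                i = bisect.bisect_left(self.tmins, a)      # first t_min >= a
                return i < len(self.cands) and self.suffix_min[i] <= b

        oracle = WindowedPairOracle([(1, 5), (2, 3), (4, 9)])
        print(oracle.query(1, 4), oracle.query(3, 8))      # True False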

    Energy Efficient Algorithms in Low-Energy Wireless Sensor Networks

    Wireless sensor networks (WSNs) consist of small, spatially distributed autonomous processors, typically deployed to gather physical data about the environment such as temperature, air pressure, and sound. WSNs have a wide range of applications, including military use, health care monitoring, and environmental sensing. Because sensors are typically battery powered, algorithms for sensor network models should seek to minimize not only runtime but also energy utilization. Specifically, to maximize network lifetime, algorithms must minimize the energy usage of the sensors that use the most energy in the network. In extremely dense networks it may be inefficient for sensors to communicate with all neighboring sensors on a consistent basis, especially in mobile wireless sensor networks (MWSNs), where the topology of the network is constantly changing. Sensors conserve energy by entering a low-energy sleep state, and in our algorithms sensors are asleep for the vast majority of the total runtime. Algorithms under these conditions face additional challenges because of the increased difficulty of coordinating between sensors. Because of the spatial nature of sensor networks, geometry problems are often of particular interest; for example, to detect outliers, data is often compared with that of the nearest neighboring sensors. In this dissertation we provide algorithmic techniques designed for divide-and-conquer solutions to computational geometry problems. We provide a technique, called breadth-first recursion, for coordinating divide-and-conquer algorithms in a single-hop setting, and we use it to sort data and to find the convex hull. Although most WSNs are multi-hop networks, locally very dense, expansive networks resemble single-hop networks; we therefore use algorithms for single-hop networks as building blocks for multi-hop algorithms via α-consolidation algorithms. We then provide α-consolidation algorithms for all-points k-nearest neighbors, the coverage boundary, and the Voronoi diagram. We also analyze the WSN problem of propagating data to a high-energy base station. Clustering approaches, such as low-energy adaptive clustering hierarchy (LEACH) and its multi-hop variant (MR-LEACH), are extremely popular for data propagation. The energy balanced protocol (EBP) is a clustering approach like MR-LEACH in which clusters pass data towards the base station but also, with some probability, send data long distances directly to the base station. We show analytically and empirically that EBP is close to optimal, while approaches that do not use long hops, like MR-LEACH, are only close to optimal if sending messages long distances is prohibitively expensive.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153370/1/timlewis_1.pd
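    To illustrate the trade-off analyzed in the last paragraph, here is a toy simulation under our own simplifying assumptions (nodes equally spaced on a line, base station at one end, transmitting over distance d costs d squared, one data unit per node per round), comparing pure hop-by-hop forwarding with EBP-style probabilistic long hops on the maximum per-node energy, the quantity that determines network lifetime. It is a harness for exploring the trade-off, not the dissertation's analytical model:

        import random

        def max_node_energy(n, rounds, p_direct):
            """Toy line network: node i sits at distance i from the base station."""
            energy = [0.0] * (n + 1)               # energy[0] is the base station
            for _ in range(rounds):
                for src in range(1, n + 1):        # each node originates one message
                    node = src
                    while node > 0:
                        if random.random() < p_direct:
                            energy[node] += node ** 2   # long hop straight to base
                            node = 0
                        else:
                            energy[node] += 1.0         # relay one unit hop
                            node -= 1
            return max(energy[1:])

        random.seed(0)
        print("hop-by-hop only :", max_node_energy(50, 200, 0.0))
        print("EBP-style p=0.02:", max_node_energy(50, 200, 0.02))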

    Computing Volumes and Convex Hulls: Variations and Extensions

    Geometric techniques are frequently utilized to analyze and reason about multi-dimensional data. When confronted with large quantities of such data, simplifying geometric statistics or summaries are often a necessary first step. In this thesis, we make contributions to two such fundamental concepts of computational geometry: Klee's Measure and Convex Hulls. The former is concerned with computing the total volume occupied by a set of overlapping rectangular boxes in d-dimensional space, while the latter is concerned with identifying extreme vertices in a multi-dimensional set of points. Both problems are frequently used to analyze optimal solutions to multi-objective optimization problems: a variant of Klee's problem called the Hypervolume Indicator gives a quantitative measure for the quality of a discrete Pareto-optimal set, while the Convex Hull represents the subset of solutions that are optimal with respect to at least one linear optimization function.

    In the first part of the thesis, we investigate several practical and natural variations of Klee's Measure Problem. We develop a specialized algorithm for a specific case of Klee's problem called the "grounded" case, which also solves the Hypervolume Indicator problem faster than any earlier solution for certain dimensions. Next, we extend Klee's problem to an uncertainty setting where the existence of the input boxes is defined probabilistically, and study computing the expectation of the volume. Additionally, we develop efficient algorithms for a discrete version of the problem, where the volume of a box is redefined to be the cardinality of its overlap with a given point set.

    The second part of the thesis investigates the convex hull problem on uncertain input. To this end, we examine two probabilistic uncertainty models for point sets. The first model incorporates uncertainty in the existence of the input points; the second extends the first by incorporating locational uncertainty. For both models, we study the problem of computing the probability that a given point is contained in the convex hull of the uncertain points. We also consider the problem of finding the most likely convex hull, i.e., the mode of the convex hull random variable.
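    For concreteness, the quantity in Klee's problem can be computed (inefficiently) by coordinate compression: in the grid induced by all box coordinates, every cell is either fully covered or fully uncovered. A short 2D sketch of the measure itself, not of the thesis's specialized algorithms:

        def union_area(rects):
            """Area of the union of axis-aligned rectangles (x1, y1, x2, y2)."""
            xs = sorted({x for r in rects for x in (r[0], r[2])})
            ys = sorted({y for r in rects for y in (r[1], r[3])})
            area = 0.0
            for i in range(len(xs) - 1):
                for j in range(len(ys) - 1):
                    cx = (xs[i] + xs[i + 1]) / 2     # cell midpoint decides coverage
                    cy = (ys[j] + ys[j + 1]) / 2
                    if any(r[0] <= cx <= r[2] and r[1] <= cy <= r[3] for r in rects):
                        area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
            return area

        # Two unit squares overlapping in a 0.5-wide strip: union area 1.5.
        print(union_area([(0, 0, 1, 1), (0.5, 0, 1.5, 1)]))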

    29th International Symposium on Algorithms and Computation: ISAAC 2018, December 16-19, 2018, Jiaoxi, Yilan, Taiwan
