
    Random Convex Hulls and Extreme Value Statistics

    In this paper we study the statistical properties of convex hulls of N random points in a plane, chosen according to a given distribution. The points may be chosen independently or they may be correlated. After a non-exhaustive survey of the somewhat sporadic literature and the diverse methods used in the random convex hull problem, we present a unifying approach, based on the notion of the support function of a closed curve and the associated Cauchy formulae, that allows us to compute exactly the mean perimeter and the mean area enclosed by the convex polygon, for independent as well as correlated points. Our method demonstrates a beautiful link between the random convex hull problem and the subject of extreme value statistics. As an example of correlated points, we study in detail the case when the points are the vertices of n independent random walks. In the continuous-time limit this reduces to n independent planar Brownian trajectories, for which we compute exactly, for all n, the mean perimeter and the mean area of their global convex hull. Our results have relevant applications in ecology, in estimating the home range of a herd of animals. Some of these results were announced recently in a short communication [Phys. Rev. Lett. 103, 140602 (2009)]. Comment: 61 pages (pedagogical review); invited contribution to the special issue of J. Stat. Phys. celebrating 50 years of the Yeshiva/Rutgers meetings.
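    The support-function route described in the abstract is easy to try numerically. For a convex region, the support function is M(θ) = max_i (x_i cos θ + y_i sin θ), and Cauchy's formulae give the perimeter as L = ∫ M(θ) dθ and the enclosed area as A = ½ ∫ [M(θ)² − M′(θ)²] dθ, both over 0 ≤ θ < 2π; for a single planar Brownian path of duration T this machinery recovers the classical values E[L] = √(8πT) and E[A] = πT/2. Below is a minimal Python sketch for independent points, checked against a direct hull computation; the Gaussian sample, grid size, and finite-difference derivative are illustrative choices, not taken from the paper.

    # Minimal sketch of the support-function approach (illustrative choices
    # throughout: Gaussian points, grid resolution, central differences).
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((500, 2))           # N = 500 independent points

    theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)

    # Support function of the hull: M(theta) = max_i x_i cos(theta) + y_i sin(theta)
    M = (pts @ dirs.T).max(axis=0)

    dtheta = theta[1] - theta[0]
    Mp = (np.roll(M, -1) - np.roll(M, 1)) / (2 * dtheta)  # periodic central difference

    perimeter = M.sum() * dtheta                          # Cauchy: L = ∫ M dθ
    area = 0.5 * (M**2 - Mp**2).sum() * dtheta            # Cauchy: A = ½ ∫ (M² − M'²) dθ

    hull = ConvexHull(pts)                                # direct check
    print(perimeter, hull.area)    # in 2D, ConvexHull.area is the perimeter
    print(area, hull.volume)       # and ConvexHull.volume is the enclosed area

    The two computations should agree to within the grid resolution; the advantage of the support-function form is that M(θ) is a maximum over the points, which is exactly what connects the problem to extreme value statistics.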

    On a Simple, Practical, Optimal, Output-Sensitive Randomized Planar Convex Hull Algorithm

    In this paper we present a truly practical and provably optimal O(n log h) time output-sensitive algorithm for the planar convex hull problem. The basic algorithm is similar to the algorithm presented in Chan, Snoeyink, and Yap [2], with the median-finding step replaced by an approximate median. We analyze two such schemes and show that for both methods the algorithm runs in expected O(n log h) time. The expected number of comparisons can be made smaller than 5n log h for the upper hull. We further show that the probability of deviation from the expected running time approaches 0 rapidly with increasing values of n and h, for any input. Our experiments suggest that this algorithm is a practical alternative to worst-case O(n log n) algorithms like Graham's, and is especially faster for small output sizes. Our approach bears some resemblance to a recent algorithm of Wenger [13], but our analysis is substantially different.
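    For orientation, here is a minimal QuickHull-style upper-hull recursion in Python. It is not the paper's algorithm, which instead replaces the exact median-finding step of Chan, Snoeyink, and Yap with an approximate median, but it shows the prune-as-you-recurse pattern that makes this family of algorithms output-sensitive; all names are illustrative.

    # QuickHull-style upper hull (illustrative only; not the paper's algorithm).
    from typing import List, Tuple

    Point = Tuple[float, float]

    def cross(o: Point, a: Point, b: Point) -> float:
        """Twice the signed area of triangle (o, a, b); > 0 means b lies left of o->a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def upper_hull(points: List[Point]) -> List[Point]:
        l = min(points)                    # leftmost endpoint (lexicographic)
        r = max(points)                    # rightmost endpoint
        return [l] + _rec(l, r, [p for p in points if cross(l, r, p) > 0]) + [r]

    def _rec(l: Point, r: Point, above: List[Point]) -> List[Point]:
        if not above:
            return []
        # The point farthest from the line l-r is guaranteed to be a hull vertex.
        m = max(above, key=lambda p: cross(l, r, p))
        left = [p for p in above if cross(l, m, p) > 0]    # prune triangle interior
        right = [p for p in above if cross(m, r, p) > 0]
        return _rec(l, m, left) + [m] + _rec(m, r, right)

    print(upper_hull([(0, 0), (2, 0), (1, 1), (1, 0.2)]))  # -> [(0, 0), (1, 1), (2, 0)]

    Each call discards every point inside the current triangle, which ties the work to the number of hull vertices h; the paper's approximate-median step is what sharpens this to expected O(n log h) for any input, a guarantee plain QuickHull does not have.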

    Toward better computation models for modern machines

    Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the computational cost of address translation in virtual memory and the difficulties of designing parallel algorithms for modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict the growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations, and we analyze the algorithms mentioned above, and others, in this model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms.

    In the second part of the paper we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on the ultimate planar convex hull algorithm of Kirkpatrick and Seidel, and it has been referred to as the first successful implementation of the QuickHull algorithm on the GPU by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for working on modern many-core machines is the general belief of the engineering community that theory does not produce applicable results, and that theoretical researchers are unaware of the difficulties that arise when adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.
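    The observation that motivates the VAT model is easy to reproduce: time a sequential scan of a large array against a random scan of the same array. The RAM model assigns both the same cost, yet on real machines the random scan is much slower, in part because of address-translation (TLB) misses. A minimal sketch follows; the array size and the numpy-based timing are illustrative assumptions, and the exact ratio depends on the machine.

    # Sequential vs. random scan of a large array: the RAM model predicts equal
    # cost, real machines disagree. Size chosen for illustration (~256 MB).
    import time
    import numpy as np

    n = 1 << 25                        # ~32M int64 entries
    a = np.arange(n, dtype=np.int64)
    perm = np.random.default_rng(0).permutation(n)

    t0 = time.perf_counter()
    s_seq = a.sum()                    # sequential scan
    t1 = time.perf_counter()
    s_rnd = a[perm].sum()              # random scan via gather
    t2 = time.perf_counter()

    assert s_seq == s_rnd              # same work, very different access pattern
    print(f"sequential: {t1 - t0:.3f}s, random: {t2 - t1:.3f}s")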