
    Decomposing and packing polygons / Dania el-Khechen.

    In this thesis, we study three different problems in the field of computational geometry: the partitioning of a simple polygon into two congruent components, the partitioning of squares and rectangles into equal-area components while minimizing the perimeter of the cuts, and the packing of the maximum number of squares in an orthogonal polygon. To solve the first problem, we present three polynomial-time algorithms which, given a simple polygon P, partition it, if possible, into two congruent and possibly nonsimple components P1 and P2: an O(n² log n) time algorithm for properly congruent components and an O(n³) time algorithm for mirror congruent components. In our analysis of the second problem, we experimentally find new bounds on the optimal partitions of squares and rectangles into equal-area components. The visualization of the best determined solutions allows us to conjecture some characteristics of a class of optimal solutions. Finally, for the third problem, we present three linear-time algorithms for packing the maximum number of unit squares in three subclasses of orthogonal polygons: staircase polygons, pyramids, and Manhattan skyline polygons. We also study a special case of the problem where the given orthogonal polygon has vertices with integer coordinates and the squares to pack are (2 × 2) squares. We model the latter problem as a binary integer program and develop a system that produces and visualizes optimal solutions. The observation of such solutions aided us in proving some characteristics of a class of optimal solutions.
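
    The 2 × 2 packing subproblem lends itself to a compact binary integer program. Below is a minimal sketch of one such formulation, not necessarily the thesis' exact model: it assumes the orthogonal polygon is given as the set of unit grid cells it covers, and that PuLP with its default CBC solver is installed.

```python
# A minimal sketch (not the thesis' exact model) of a binary integer program
# for packing 2x2 squares into an orthogonal polygon with integer vertices.
# Assumption: the polygon is described by the set of unit grid cells it covers.
import pulp

def pack_2x2(cells):
    """cells: set of (i, j) unit cells inside the polygon."""
    # Candidate placements: lower-left cell (i, j) such that the whole
    # 2x2 block {(i, j), (i+1, j), (i, j+1), (i+1, j+1)} lies in the polygon.
    placements = [
        (i, j) for (i, j) in cells
        if {(i + 1, j), (i, j + 1), (i + 1, j + 1)} <= cells
    ]
    prob = pulp.LpProblem("pack_2x2_squares", pulp.LpMaximize)
    x = {p: pulp.LpVariable(f"x_{p[0]}_{p[1]}", cat=pulp.LpBinary)
         for p in placements}
    # Objective: maximize the number of packed squares.
    prob += pulp.lpSum(x.values())
    # Each unit cell is covered by at most one chosen square.
    for (ci, cj) in cells:
        covering = [x[(i, j)] for (i, j) in placements
                    if i <= ci <= i + 1 and j <= cj <= j + 1]
        if covering:
            prob += pulp.lpSum(covering) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [p for p in placements if x[p].value() > 0.5]

# Example: an L-shaped polygon made of unit cells.
L_shape = {(i, j) for i in range(4) for j in range(2)} | \
          {(i, j) for i in range(2) for j in range(2, 4)}
print(pack_2x2(L_shape))
```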

    A curve shortening flow rule for closed embedded plane curves with a prescribed rate of change in enclosed area

    Motivated by a problem from fluid mechanics, we consider a generalization of the standard curve shortening flow problem for a closed embedded plane curve such that the area enclosed by the curve is forced to decrease at a prescribed rate. Using formal asymptotic and numerical techniques, we derive possible extinction shapes as the curve contracts to a point, depending on the prescribed rate of area decrease; we find there is a wider class of extinction shapes than for standard curve shortening, for which initially simple closed curves are always asymptotically circular. We also provide numerical evidence that self-intersection is possible for non-convex initial conditions, distinguishing between pinch-off and coalescence of the curve interior.
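
    A flow of this type can be explored with a very simple numerical experiment. The sketch below is a crude explicit discretization, not the authors' scheme: the curve moves with inward normal speed κ plus a spatially constant term chosen at each step so that the enclosed area changes at the prescribed rate; the sampling, time step, and prescribed rate are all illustrative assumptions.

```python
# Crude explicit Euler sketch of a curve shortening flow whose enclosed area
# decreases at a prescribed rate: inward normal speed V = kappa + lambda(t),
# with lambda(t) chosen so that dA/dt = -integral(V ds) equals the target.
import numpy as np

def step(P, area_rate, dt):
    """One explicit Euler step for a closed CCW polygonal curve P (N x 2)."""
    # Finite differences in the (periodic) index parameter.
    d1 = (np.roll(P, -1, axis=0) - np.roll(P, 1, axis=0)) / 2.0
    d2 = np.roll(P, -1, axis=0) - 2.0 * P + np.roll(P, 1, axis=0)
    speed = np.linalg.norm(d1, axis=1)                     # |dP/du|
    kappa = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / speed**3
    inward = np.column_stack((-d1[:, 1], d1[:, 0])) / speed[:, None]
    ds = speed                                             # arc-length weights
    L = ds.sum()
    # Pick lambda so that -sum((kappa + lambda) * ds) = area_rate.
    lam = (-area_rate - (kappa * ds).sum()) / L
    V = kappa + lam
    return P + dt * V[:, None] * inward

# Example: start from a unit circle and force dA/dt = -0.5 (an assumption).
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
P = np.column_stack((np.cos(t), np.sin(t)))
for _ in range(100):
    P = step(P, area_rate=-0.5, dt=1e-4)
```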

    Mapping polygons to the grid with small Hausdorff and Fréchet distance

    We show how to represent a simple polygon P by a grid (pixel-based) polygon Q that is simple and whose Hausdorff or Fréchet distance to P is small. For any simple polygon P, a grid polygon exists with constant Hausdorff distance between their boundaries and between their interiors. Moreover, we show that with a realistic input assumption we can also realize constant Fréchet distance between the boundaries. We present algorithms accompanying these constructions, heuristics to improve their output while keeping the distance bounds, and experiments to assess the output.
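
    For contrast with the paper's construction, the naive baseline is simply to snap every vertex to the nearest grid point and measure how far the boundary moved. The sketch below does exactly that and reports a discrete Hausdorff distance between densely sampled boundaries; naive rounding can break simplicity or move the boundary far, which is precisely what the paper's grid polygons avoid. The sample polygon and sampling density are assumptions.

```python
# Naive baseline: snap vertices of a simple polygon to the unit grid and
# measure the symmetric, discrete Hausdorff distance between boundaries.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def sample_boundary(poly, per_edge=50):
    """Dense point samples on the boundary of a closed polygon (N x 2)."""
    pts = []
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):
        for s in np.linspace(0.0, 1.0, per_edge, endpoint=False):
            pts.append((1.0 - s) * a + s * b)
    return np.array(pts)

def hausdorff_to_grid(poly):
    grid_poly = np.round(poly)        # snap vertices to integer grid points
    A = sample_boundary(poly)
    B = sample_boundary(grid_poly)
    d = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
    return d, grid_poly

# Example: a small simple polygon with non-integer vertices.
P = np.array([(0.2, 0.1), (3.7, 0.4), (3.3, 2.8), (1.1, 1.6), (0.4, 2.9)])
dist, Q = hausdorff_to_grid(P)
print(dist, Q)
```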

    Cutting Polygons into Small Pieces with Chords: Laser-Based Localization

    Motivated by indoor localization by tripwire lasers, we study the problem of cutting a polygon into small-size pieces using the chords of the polygon. Several versions are considered, depending on the definition of the "size" of a piece. In particular, we consider the area, the diameter, and the radius of the largest inscribed circle as measures of the size of a piece. We also consider different objectives: either minimizing the maximum size of a piece for a given number of chords, or minimizing the number of chords that achieve a given size threshold for the pieces. We give hardness results for polygons with holes and approximation algorithms for multiple variants of the problem.
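
    The basic operation underlying all of these variants is cutting a polygon with a chord and measuring the resulting pieces. The toy sketch below cuts a convex polygon with the supporting line of a chord and reports two of the size measures from the abstract (area and diameter); the polygon, the chord, and the convexity assumption are illustrative only, while the paper handles general polygons.

```python
# Toy sketch: cut a convex polygon by the supporting line of a chord
# (for convex polygons this equals cutting by the chord itself) and
# report the area and diameter of each piece.
import itertools
import math

def clip_halfplane(poly, a, b, keep_left=True):
    """Clip a convex polygon (list of (x, y)) to one side of line a -> b."""
    def side(p):
        s = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        return s if keep_left else -s
    out = []
    for p, q in zip(poly, poly[1:] + poly[:1]):
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:                    # edge crosses the cutting line
            t = sp / (sp - sq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    return abs(sum(p[0] * q[1] - q[0] * p[1]
                   for p, q in zip(poly, poly[1:] + poly[:1]))) / 2.0

def diameter(poly):
    return max(math.dist(p, q) for p, q in itertools.combinations(poly, 2))

# Cut the unit square with the chord from (0.25, 0) to (1, 0.75).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
chord = ((0.25, 0.0), (1.0, 0.75))
pieces = [clip_halfplane(square, *chord, keep_left=True),
          clip_halfplane(square, *chord, keep_left=False)]
for piece in pieces:
    print(round(area(piece), 3), round(diameter(piece), 3))
```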

    Probing a Set of Trajectories to Maximize Captured Information

    We study a trajectory analysis problem we call the Trajectory Capture Problem (TCP), in which, for a given input set T of trajectories in the plane and an integer k ≥ 2, we seek to compute a set of k points ("portals") to maximize the total weight of all subtrajectories of T between pairs of portals. This problem naturally arises in trajectory analysis and summarization. We show that the TCP is NP-hard (even in very special cases) and give some first approximation results. Our main focus is on attacking the TCP with practical algorithm-engineering approaches, including integer linear programming (to solve instances to provable optimality) and local search methods. We study the integrality gap arising from such approaches. We analyze our methods on different classes of data, including benchmark instances that we generate. Our goal is to understand the best performing heuristics, based on both solution time and solution quality. We demonstrate that we are able to compute provably optimal solutions for real-world instances.
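
    A local-search approach in this spirit is easy to prototype. The sketch below is a toy swap-based hill climber, not the paper's engineered method, and it makes two simplifying assumptions that are not from the abstract: portals are chosen among trajectory vertices, and the "captured" weight of a trajectory is the length of its portion between its first and last visited portal.

```python
# Toy swap-based local search for a simplified Trajectory Capture Problem.
import math
import random

def captured_weight(trajectories, portals):
    total = 0.0
    for traj in trajectories:
        hits = [i for i, p in enumerate(traj) if p in portals]
        if len(hits) >= 2:
            a, b = hits[0], hits[-1]        # first and last portal visit
            total += sum(math.dist(traj[i], traj[i + 1]) for i in range(a, b))
    return total

def local_search(trajectories, k, iters=2000, seed=0):
    rng = random.Random(seed)
    candidates = sorted({p for traj in trajectories for p in traj})
    portals = set(rng.sample(candidates, k))
    best = captured_weight(trajectories, portals)
    for _ in range(iters):
        out_p = rng.choice(sorted(portals))
        in_p = rng.choice(candidates)
        if in_p in portals:
            continue
        trial = (portals - {out_p}) | {in_p}
        w = captured_weight(trajectories, trial)
        if w > best:                        # accept only improving swaps
            portals, best = trial, w
    return portals, best

# Two short trajectories on a grid; place k = 2 portals.
T = [[(0, 0), (1, 0), (2, 0), (3, 0)],
     [(0, 1), (1, 1), (2, 1)]]
print(local_search(T, k=2))
```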

    Shooting permanent rays among disjoint polygons in the plane

    We present a data structure for ray shooting-and-insertion in the free space among disjoint polygonal obstacles with a total of n vertices in the plane, where each ray starts at the boundary of some obstacle. The portion of each query ray between the starting point and the first obstacle hit is inserted permanently as a new obstacle. Our data structure uses O(n log n) space and preprocessing time, and it supports m successive ray shooting-and-insertion queries in O(n log² n + m log m) total time. We present two applications of our data structure: (1) It supports an efficient implementation of auto-partitions in the plane, i.e., binary space partitions where each partition is done along the supporting line of an input segment. If n input line segments are fragmented into m pieces by an auto-partition, then it can now be implemented in O(n log² n + m log m) time. This improves the expected runtime of Paterson and Yao's classical randomized auto-partition algorithm for n disjoint line segments to O(n log² n). (2) If we are given disjoint polygonal obstacles with a total of n vertices in the plane, a permutation of the reflex vertices, and a half-line at each reflex vertex that partitions the reflex angle into two convex angles, then the folklore convex partitioning algorithm draws a ray emanating from each reflex vertex, in the prescribed order and in the given direction, until it hits another obstacle, a previous ray, or infinity. The previously best implementation (with a semi-dynamic ray shooting data structure) requires O(n^(3/2−ε/2)) time using O(n^(1+ε)) space. Our data structure improves the runtime to O(n log² n).
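
    For intuition, the query the data structure supports can be simulated by brute force in O(n) time per shot, far from the paper's bounds. The sketch below is only such a baseline: obstacles are assumed to be represented as a flat list of segments, and degenerate or collinear cases are ignored.

```python
# Brute-force baseline for ray shooting-and-insertion: shoot a ray from a
# point on some obstacle, find the first segment it hits, and permanently
# insert the traversed portion as a new obstacle segment.
EPS = 1e-12

def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def shoot_and_insert(segments, origin, direction):
    """Return the hit point and append the new segment to `segments`."""
    ox, oy = origin
    dx, dy = direction
    best_t = float("inf")
    for (px, py), (qx, qy) in segments:
        vx, vy = qx - px, qy - py
        denom = cross(dx, dy, vx, vy)
        if abs(denom) < EPS:                    # segment parallel to the ray
            continue
        t = cross(px - ox, py - oy, vx, vy) / denom
        s = cross(px - ox, py - oy, dx, dy) / denom
        if t > EPS and -EPS <= s <= 1 + EPS:    # hit strictly ahead of origin
            best_t = min(best_t, t)
    if best_t == float("inf"):
        return None                             # ray escapes to infinity
    hit = (ox + best_t * dx, oy + best_t * dy)
    segments.append((origin, hit))              # permanent insertion
    return hit

# Example: two obstacle segments; shoot to the right from a boundary point.
obstacles = [((0.0, 0.0), (0.0, 2.0)), ((3.0, -1.0), (3.0, 3.0))]
print(shoot_and_insert(obstacles, origin=(0.0, 1.0), direction=(1.0, 0.0)))
print(obstacles[-1])   # the inserted ray segment is now itself an obstacle
```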