
    Generating Kernel Aware Polygons

    Problems dealing with the generation of random polygons have important applications in evaluating the performance of algorithms on polygonal domains. We review existing algorithms for generating random polygons and present an algorithm for generating polygons that satisfy prescribed visibility properties. In particular, we propose an algorithm for generating polygons with large kernels. We also present experimental results on generating such polygons.
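
    For readers unfamiliar with the term: the kernel of a polygon is the set of points from which the entire polygon is visible, and a polygon is star-shaped exactly when its kernel is nonempty. As a minimal illustration of the concept (a sketch only, not the paper's generation algorithm), the following test uses the standard fact that the kernel of a counterclockwise simple polygon is the intersection of the closed half-planes to the left of its directed edges:

        # Sketch: kernel-membership test for a simple CCW polygon.
        # Not the paper's algorithm; it only illustrates what a "kernel" is.

        def cross(o, a, b):
            # z-component of the cross product of vectors OA and OB
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def in_kernel(p, polygon):
            # p sees all of 'polygon' iff p lies left of (or on) every directed edge
            n = len(polygon)
            return all(cross(polygon[i], polygon[(i + 1) % n], p) >= 0
                       for i in range(n))

        dart = [(0, 0), (4, 0), (4, 4), (2, 1)]   # hypothetical non-convex polygon
        print(in_kernel((3, 1), dart))    # True: (3, 1) sees the whole polygon
        print(in_kernel((1, 1), dart))    # False: the reflex vertex blocks the view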

    Polygon Exploration with Time-Discrete Vision

    With the advent of autonomous robots with two- and three-dimensional scanning capabilities, classical visibility-based exploration methods from computational geometry have gained practical importance. However, real-life laser scanning of useful accuracy does not allow the robot to scan continuously while in motion; instead, it has to stop each time it surveys its environment. This requirement was studied by Fekete, Klein and Nuechter for the subproblem of looking around a corner, but until now it has not been considered in an online setting for whole polygonal regions. We give the first algorithmic results for this problem, which combines stationary art-gallery-type aspects with watchman-type issues in an online scenario: we demonstrate that even for orthoconvex polygons, a competitive strategy can be achieved only for limited aspect ratio A (the ratio of the maximum and minimum edge length of the polygon), i.e., for a given lower bound on the size of an edge; we give a matching upper bound by providing an O(log A)-competitive strategy for simple rectilinear polygons, under the assumption that each edge of the polygon has to be fully visible from some scan point.
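
    The parameter A above is simply the ratio between the longest and shortest edge of the polygon; a tiny sketch (with a hypothetical rectilinear polygon) makes the quantity that controls the O(log A) bound concrete:

        from math import hypot, log2

        def aspect_ratio(polygon):
            # A = (maximum edge length) / (minimum edge length)
            n = len(polygon)
            lengths = [hypot(polygon[(i + 1) % n][0] - polygon[i][0],
                             polygon[(i + 1) % n][1] - polygon[i][1])
                       for i in range(n)]
            return max(lengths) / min(lengths)

        # hypothetical rectilinear polygon with edge lengths between 1 and 8
        poly = [(0, 0), (8, 0), (8, 1), (1, 1), (1, 2), (0, 2)]
        A = aspect_ratio(poly)
        print(A, log2(A))   # the competitive factor grows with log A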

    A Contribution to Triangulation Algorithms for Simple Polygons

    Decomposing a simple polygon into simpler components is one of the basic tasks in computational geometry and its applications. The most important simple-polygon decomposition is triangulation. The known algorithms for polygon triangulation can be classified into three groups: algorithms based on diagonal insertion, algorithms based on Delaunay triangulation, and algorithms using Steiner points. The paper briefly explains the most popular algorithms from each group and summarizes the common features of the groups. Four algorithms based on diagonal insertion are then tested: a recursive diagonal-inserting algorithm, an ear-cutting algorithm, Kong’s Graham scan algorithm, and Seidel’s randomized incremental algorithm. An analysis of speed, the quality of the output triangles, and the ability to handle holes concludes the paper.
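
    To make the diagonal-inserting family concrete, here is a minimal ear-cutting sketch (the naive quadratic textbook version, not one of the optimized implementations benchmarked in the paper); it assumes a simple, hole-free polygon given in counterclockwise order:

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def in_triangle(p, a, b, c):
            # p inside or on triangle abc (abc counterclockwise)
            return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                    and cross(c, a, p) >= 0)

        def triangulate(polygon):
            # Ear cutting: repeatedly clip a convex vertex whose triangle
            # contains no other vertex, until only one triangle remains.
            verts = list(polygon)
            triangles = []
            while len(verts) > 3:
                n = len(verts)
                for i in range(n):
                    a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
                    if cross(a, b, c) <= 0:       # reflex/collinear: no ear here
                        continue
                    if all(not in_triangle(p, a, b, c)
                           for p in verts if p not in (a, b, c)):
                        triangles.append((a, b, c))   # clip the ear at b
                        del verts[i]
                        break
            triangles.append(tuple(verts))
            return triangles

        # hypothetical example: a square with a notch, 5 vertices -> 3 triangles
        print(triangulate([(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)]))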

    Parallel searching on m rays

    We investigate parallel searching on m concurrent rays. We assume that a target t is located somewhere on one of the rays; we are given a group of m point robots, each of which has to reach t. Furthermore, we assume that the robots have no way of communicating over distance. Given a strategy S, we are interested in the competitive ratio, defined as the ratio of the time needed by the robots to reach t using S and the time needed to reach t if the location of t is known in advance. If a lower bound on the distance to the target is known, then there is a simple strategy which achieves a competitive ratio of 9, independent of m. We show that 9 is a lower bound on the competitive ratio for two large classes of strategies if m ≥ 2. If the minimum distance to the target is not known in advance, we show a lower bound on the competitive ratio of 1 + 2(k+1)^(k+1)/k^k, where k = ⌈log m⌉ and log denotes the base-2 logarithm. We also give a strategy that obtains this ratio.
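
    As a quick numerical illustration of the second bound (a sketch using the formula exactly as stated above; note that for m = 2 it already evaluates to 9, matching the known-lower-bound case):

        from math import ceil, log2

        def lower_bound(m):
            # 1 + 2(k+1)^(k+1) / k^k with k = ceil(log2(m)), as in the abstract
            k = ceil(log2(m))
            return 1 + 2 * (k + 1) ** (k + 1) / k ** k

        for m in (2, 4, 8, 16):
            print(m, round(lower_bound(m), 2))   # 9.0, 14.5, 19.96, 25.41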

    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must be designed with the worst-case examples in mind, which makes them needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem; the usual form such models take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are fatter, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems: two related to decompositions of input objects and three suggested by computer graphics.

    Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and is it possible to obtain those decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two much simpler algorithms for triangulating fat polygons in linear time. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met, and that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter.

    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray shot from a given point in a given direction. We present a new data structure for answering vertical ray-shooting queries, that is, queries where the ray's direction is fixed, as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems. Another problem studied in the field of computer graphics is the depth-order problem; we study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above another if they share a point in the projection onto the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying whether a given depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship contains a cycle). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects. The final problem that we study is hidden-surface removal. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; the result is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have overlapping borders. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and scenes in which the objects do not have a depth order, two features missing from most other hidden-surface-removal algorithms.
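
    To make the depth-order notion concrete, here is a hedged sketch (a naive quadratic illustration, not the thesis's algorithms): it builds the above/below relation for disjoint axis-aligned boxes and topologically sorts it. For such boxes the relation is always acyclic, but for general objects three shapes can be mutually above one another, and the CycleError branch shows where such a scene would be rejected as having no depth order:

        from graphlib import TopologicalSorter, CycleError   # Python 3.9+

        def xy_overlap(a, b):
            # boxes are (xmin, xmax, ymin, ymax, zmin, zmax); do their
            # projections onto the xy-plane share a point?
            return a[0] < b[1] and b[0] < a[1] and a[2] < b[3] and b[2] < a[3]

        def depth_order(boxes):
            # top-to-bottom order of disjoint boxes, or None if a cycle exists
            deps = {i: set() for i in range(len(boxes))}
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    a, b = boxes[i], boxes[j]
                    if xy_overlap(a, b):
                        if a[4] >= b[5]:      # a lies entirely above b,
                            deps[j].add(i)    # so a must precede b
                        else:                 # disjoint boxes: b is above a
                            deps[i].add(j)
            try:
                return list(TopologicalSorter(deps).static_order())
            except CycleError:
                return None                   # no valid depth order

        # hypothetical scene: box 2 above box 1 above box 0
        scene = [(0, 4, 0, 4, 0, 1), (1, 3, 1, 3, 2, 3), (2, 5, 2, 5, 4, 5)]
        print(depth_order(scene))             # [2, 1, 0]

    Verifying a given order, the thesis's second problem, is the easier direction: one only has to check that no object appears after an object that lies above it.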

    Defining and identifying the roles of geographic references within text


    Learning with Weak Annotations for Text in the Wild Detection and Recognition

    In this work, we present a method for exploiting weakly annotated images to improve text extraction pipelines. The weak annotation of an image is a list of texts that are likely to appear in the image, without any information about their location. An arbitrary existing end-to-end text recognition system is used to obtain text region proposals together with their, possibly erroneous, transcriptions. A process that includes matching the imprecise transcriptions to the weak annotations and an edit-distance-guided neighbourhood search produces nearly error-free, localised instances of scene text, which we treat as "pseudo ground truth" for training. We apply the method to two weakly annotated datasets and use the obtained pseudo ground truth to re-train the end-to-end system. The process consistently improves the accuracy of a state-of-the-art recognition model across different benchmark datasets (image domains) and provides a significant performance boost on the same dataset, improving further when applied iteratively.
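
    A hedged sketch of the matching step described above (the function names and the acceptance threshold are assumptions, and the real pipeline additionally performs the Levenshtein-guided neighbourhood search): each proposed transcription is paired with the closest entry of the weak annotation list under edit distance and kept as pseudo ground truth only when the relative distance is small:

        def levenshtein(a, b):
            # classic dynamic-programming edit distance, two rows at a time
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        def match_to_weak_annotation(proposals, annotation, max_rel_dist=0.3):
            # keep a proposal as pseudo ground truth only if some annotated
            # text is close enough; trust the annotation's spelling, not OCR's
            pseudo_gt = []
            for region, text in proposals:
                best = min(annotation, key=lambda t: levenshtein(text, t))
                if levenshtein(text, best) / max(len(best), 1) <= max_rel_dist:
                    pseudo_gt.append((region, best))
            return pseudo_gt

        # hypothetical proposals: (bounding box, possibly erroneous transcription)
        proposals = [((10, 20, 80, 40), "STARBUCKS"), ((5, 60, 50, 80), "C0FFEE")]
        print(match_to_weak_annotation(proposals, ["STARBUCKS", "COFFEE"]))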
