
    Thinning-free Polygonal Approximation of Thick Digital Curves Using Cellular Envelope

    Since the inception of successful rasterization of curves and objects in the digital space, several algorithms have been proposed for approximating a given digital curve. All of these algorithms, however, resort to thinning as a preprocessing step before approximating a digital curve of varying thickness. This paper describes a novel thinning-free algorithm for polygonal approximation of an arbitrarily thick digital curve, based on the concept of a "cellular envelope", newly introduced here. The cellular envelope, defined as the smallest set of cells containing the given curve, and hence bounded by the two tightest (inner and outer) isothetic polygons, is constructed using a combinatorial technique. This envelope is then analyzed, using certain attributes of digital straightness, to determine a polygonal approximation of the curve as a sequence of cells. Since a real-world curve, i.e., a curve-shaped object with varying thickness, unexpected disconnectedness, noisy information, and so on, is unsuitable for existing polygonal-approximation algorithms, encapsulating the curve in the cellular envelope is what enables the approximation. Owing to the implicit Euclidean-free metrics and the combinatorial properties of the cellular plane, the algorithm uses only primitive integer operations, leading to fast execution. Experimental results, including output polygons for different values of the approximation parameter on several real-world digital curves, two measures of approximation quality, comparisons with two other well-known algorithms, and CPU times, demonstrate the elegance and efficacy of the proposed algorithm.
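
    As a rough illustration of the envelope idea only (not the paper's algorithm, which also derives the bounding isothetic polygons and analyzes digital straightness), the following hypothetical C++ sketch maps each pixel of a thick digital curve to the g x g grid cell containing it, using integer operations alone; all names are illustrative.

```cpp
// Minimal sketch: the set of g x g grid cells occupied by a thick digital
// curve. The paper's cellular envelope is this cell set together with its
// inner/outer isothetic boundary polygons; only the cell set is shown here.
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

using Pixel = std::pair<int, int>;   // (x, y) of a curve pixel
using Cell  = std::pair<int, int>;   // (column, row) of a grid cell

// Map each curve pixel to the cell containing it; integer operations only,
// matching the paper's claim of Euclidean-free, primitive arithmetic.
std::set<Cell> cellular_envelope(const std::vector<Pixel>& curve, int g) {
    std::set<Cell> cells;
    for (const auto& [x, y] : curve)
        cells.insert({x / g, y / g});   // assumes non-negative coordinates
    return cells;
}

int main() {
    // A short, two-pixel-thick diagonal stroke as toy input.
    std::vector<Pixel> curve;
    for (int i = 0; i < 12; ++i) {
        curve.push_back({i, i});
        curve.push_back({i + 1, i});
    }
    for (const auto& [cx, cy] : cellular_envelope(curve, 4))
        std::printf("cell (%d, %d)\n", cx, cy);
}
```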

    Identification of Change in a Dynamic Dot Pattern and its Use in the Maintenance of Footprints

    Examples of spatio-temporal data that can be represented as sets of points (called dot patterns) are pervasive in many applications, for example when tracking herds of migrating animals, ships in busy shipping channels, and crowds of people in everyday life. The use of this type of data extends beyond the standard remit of Geographic Information Science (GISc), as classification and optimisation problems can often be visualised in the same manner. A common task within these fields is the assignment of a region (called a footprint) that is representative of the underlying pattern. The ways in which this footprint can be generated have been the subject of much research, with many algorithms having been produced. Much of this research has treated the dot patterns and footprints as static entities; however, in many of the applications the data is prone to change. This thesis proposes that the footprint need not be updated every time the dot pattern changes: the footprint can remain an appropriate representation of the pattern if the amount of change is slight. To ascertain the appropriate times at which to update the footprint, and when to leave it as it is, this thesis introduces the concept of change identifiers as simple measures of change between two dot patterns. Underlying the change identifiers is an in-depth examination of the data inherent in the dot pattern and the creation of descriptors that represent this data. The experiments performed in this thesis show that change identifiers are able to distinguish between different types of change across dot patterns from different sources. In doing so, the change identifiers reduce the number of updates of the footprint while maintaining a measurably good representation of the dot pattern.
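
    The thesis's actual descriptors and change identifiers are not reproduced here; the hypothetical sketch below only conveys the gating idea: compute a cheap summary of each dot pattern, measure the change between two summaries, and recompute the footprint only when that change crosses a threshold. The descriptor (centroid plus RMS spread) and the threshold value are assumptions made for illustration.

```cpp
// Illustrative change identifier for two dot patterns: a cheap scalar
// measure of change that gates the (expensive) footprint recomputation.
#include <cmath>
#include <vector>

struct Dot { double x, y; };

struct Descriptor { double cx, cy, spread; };   // centroid + RMS radius

// Summarize a pattern; assumes a non-empty set of dots.
Descriptor describe(const std::vector<Dot>& dots) {
    double cx = 0, cy = 0;
    for (const auto& d : dots) { cx += d.x; cy += d.y; }
    cx /= dots.size(); cy /= dots.size();
    double s = 0;
    for (const auto& d : dots)
        s += (d.x - cx) * (d.x - cx) + (d.y - cy) * (d.y - cy);
    return {cx, cy, std::sqrt(s / dots.size())};
}

// Change identifier: centroid drift in units of the old spread, plus the
// relative change in spread itself.
double change(const Descriptor& a, const Descriptor& b) {
    double drift = std::hypot(b.cx - a.cx, b.cy - a.cy) / a.spread;
    double grow  = std::fabs(b.spread - a.spread) / a.spread;
    return drift + grow;
}

bool footprint_needs_update(const Descriptor& a, const Descriptor& b) {
    return change(a, b) > 0.1;   // hypothetical threshold, tuned per application
}
```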

    Contributions to Directed Algebraic Topology: with inspirations from concurrency theory


    Voronoi diagrams in the max-norm: algorithms, implementation, and applications

    Voronoi diagrams and their numerous variants are well-established objects in computational geometry. They have proven extremely useful for tackling geometric problems in various domains such as VLSI CAD, computer graphics, pattern recognition, and information retrieval. In this dissertation, we study the generalized Voronoi diagram of line segments, motivated by applications in VLSI computer-aided design. Our work has three directions: algorithms, implementation, and applications of line-segment Voronoi diagrams. Our results are as follows. (1) Algorithms for the farthest Voronoi diagram of line segments in the Lp metric, 1 ≤ p ≤ ∞; our main interest is the L2 (Euclidean) and the L∞ metric. We first introduce the farthest line-segment hull and its Gaussian map to characterize the regions of the farthest line-segment Voronoi diagram at infinity. We then adapt well-known convex-hull techniques to compute the farthest line-segment hull, and therefore the farthest-segment Voronoi diagram. Our approach unifies the techniques for computing farthest Voronoi diagrams of points and of line segments. (2) The implementation of the L∞ Voronoi diagram of line segments in the Computational Geometry Algorithms Library (CGAL). Our software (approximately 17K lines of C++ code) is built on top of the existing CGAL package for the L2 (Euclidean) Voronoi diagram of line segments. It has been accepted and integrated into the upcoming version of the library, CGAL-4.7, to be released in September 2015. We implemented the L∞ metric because we target applications in VLSI design, where shapes are predominantly rectilinear and the L∞ segment Voronoi diagram is computationally simpler. (3) The application of our Voronoi software to proximity-related problems in VLSI pattern analysis. In particular, we use the Voronoi diagram to identify critical locations in patterns of VLSI layout, which can be faulty during the printing process of a VLSI chip. We present experiments on layout pieces provided by IBM Research, Zurich. Our Voronoi-based method found all problematic locations in the provided layout pieces, very quickly and without any manual intervention.
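
    For flavor, here is a minimal usage sketch of the L∞ segment Delaunay graph package that this work contributed to CGAL; the L∞ Voronoi diagram is its dual. The type and header names follow the CGAL documentation as best recalled, so verify them against the CGAL-4.7 manual before relying on them.

```cpp
// Sketch: build the L-infinity segment Delaunay graph of two rectilinear
// segments, as in a VLSI layout. Names per the CGAL "2D Segment Delaunay
// Graphs (Linf)" package; check the CGAL-4.7+ manual for the exact API.
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Filtered_kernel.h>
#include <CGAL/Segment_Delaunay_graph_Linf_2.h>
#include <CGAL/Segment_Delaunay_graph_Linf_filtered_traits_2.h>
#include <iostream>

typedef CGAL::Simple_cartesian<double>                         CK;
typedef CGAL::Filtered_kernel<CK>                              K;
typedef CGAL::Segment_Delaunay_graph_Linf_filtered_traits_2<K> Gt;
typedef CGAL::Segment_Delaunay_graph_Linf_2<Gt>                SDG;

int main() {
    SDG sdg;
    // Two horizontal wires; each insert adds one closed segment site.
    sdg.insert(K::Point_2(0, 0), K::Point_2(10, 0));
    sdg.insert(K::Point_2(0, 4), K::Point_2(10, 4));
    // The Linf Voronoi diagram of the segments is the dual of this graph.
    std::cout << "input sites: " << sdg.number_of_input_sites() << "\n";
    return 0;
}
```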

    Collection of abstracts of the 24th European Workshop on Computational Geometry

    The 24th European Workshop on Computational Geometry (EuroCG'08) was held at INRIA Nancy - Grand Est & LORIA on March 18-20, 2008. The present collection of abstracts contains the 63 scientific contributions as well as three invited talks presented at the workshop.

    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must be designed with the worst-case examples in mind, which makes them needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem; the usual form such models take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems: two related to decompositions of input objects and three suggested by computer graphics.

    Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex.

    We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two much simpler algorithms for triangulating fat polygons in linear time. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon.

    We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed, and that if we wish the output to be fat and convex, the restrictions must be much tighter.

    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects: the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray from a given point in a given direction. We present a new data structure for answering vertical ray-shooting queries (queries where the ray's direction is fixed) as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems.

    Another problem studied in the field of computer graphics is the depth-order problem; we study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point; a toy version is sketched after this abstract. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying whether a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects.

    The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should run faster than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm can handle curved objects and situations where the objects do not have a depth order, two features missing from most other algorithms that perform hidden-surface removal.
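
    To make the depth-order discussion concrete, here is a toy C++ sketch with axis-aligned boxes standing in for fat objects: it builds the above/below relation naively in O(n^2) time and topologically sorts it, reporting failure when the relation is cyclic (i.e., when no depth order exists). This illustrates the problem statement only, not the thesis's much more efficient algorithms.

```cpp
// Toy depth-order computation for axis-aligned boxes. Box A is "above"
// box B here when their xy-projections overlap and A lies entirely higher
// in z; a cycle in this relation means no depth order exists.
#include <cstdio>
#include <vector>

struct Box { double x0, x1, y0, y1, z0, z1; };

bool xy_overlap(const Box& a, const Box& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

// Kahn's algorithm: returns true and fills `order` (top to bottom) if an
// order exists; returns false if the above/below relation has a cycle.
bool depth_order(const std::vector<Box>& boxes, std::vector<int>& order) {
    int n = (int)boxes.size();
    order.clear();
    std::vector<std::vector<int>> below(n);   // edges: above -> below
    std::vector<int> indeg(n, 0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j && xy_overlap(boxes[i], boxes[j]) &&
                boxes[i].z0 >= boxes[j].z1) {
                below[i].push_back(j);
                ++indeg[j];
            }
    std::vector<int> stack;
    for (int i = 0; i < n; ++i)
        if (indeg[i] == 0) stack.push_back(i);
    while (!stack.empty()) {
        int v = stack.back(); stack.pop_back();
        order.push_back(v);
        for (int w : below[v])
            if (--indeg[w] == 0) stack.push_back(w);
    }
    return (int)order.size() == n;   // all boxes ordered <=> acyclic
}

int main() {
    std::vector<Box> boxes = {
        {0, 4, 0, 4, 2, 3},   // sits above the second box
        {1, 5, 1, 5, 0, 1},
    };
    std::vector<int> order;
    if (depth_order(boxes, order))
        for (int v : order) std::printf("box %d\n", v);
    else
        std::puts("no depth order (cyclic above/below relation)");
}
```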