
    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic", so the theoretical running time is not a good predictor of the running time in practice. Moreover, algorithms must be designed with these worst-case examples in mind, which makes them needlessly complicated. In recent years, realistic input models have been proposed to deal with this problem; the usual approach is to bound some geometric property of the input by a constant. In this thesis we examine one specific realistic input model: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are fatter, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems: two related to decompositions of input objects, and three suggested by computer graphics.

    Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are keeping the number of pieces produced by the decomposition small and computing the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and is it possible to obtain those decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two much simpler algorithms for triangulating fat polygons in linear time. They exploit the observation that a small set of guards placed at points inside a (certain type of) fat polygon suffices to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met, and that a quadratic number of pieces may be needed if these restrictions are not met. We also show that if we wish the output pieces to be both fat and convex, the restrictions must be much tighter.

    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects from two perspectives. This is the problem of preprocessing the data into a structure that can answer which object is first hit by a query ray from a given point in a given direction. We present a new data structure for answering vertical ray-shooting queries, that is, queries where the ray's direction is fixed, as well as a data structure for answering queries for rays with arbitrary direction. Both structures improve the best known results on these problems.

    The second problem, also studied in the field of computer graphics, is the depth-order problem, which we study in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above another if they share a point in the projection onto the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding a depth order of a group of fat objects and an algorithm for verifying whether a given depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order when the objects have no depth order (this happens when the above/below relation contains a cycle). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects.

    The final problem we study is hidden-surface removal. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; this is called the visibility map. The main difficulty is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, the algorithm should run faster than when all objects are visible and have overlapping borders. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm can handle curved objects and scenes in which the objects have no depth order, two features missing from most other hidden-surface-removal algorithms.
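
    To make the depth-order discussion concrete, here is a minimal sketch (not the thesis's improved algorithm) that builds the above/below relation by brute force in O(n^2) time and then topologically sorts it, reporting failure when the relation contains a cycle and hence no depth order exists. It assumes the objects are axis-aligned boxes given as (xmin, xmax, ymin, ymax, zmin, zmax) tuples; the predicate and helper names are illustrative only.

        from itertools import combinations
        from graphlib import CycleError, TopologicalSorter

        def box_above(a, b):
            # a is above b when their xy-projections overlap and a's
            # z-range starts at or above the top of b's z-range.
            ax0, ax1, ay0, ay1, az0, az1 = a
            bx0, bx1, by0, by1, bz0, bz1 = b
            xy_overlap = ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
            return xy_overlap and az0 >= bz1

        def depth_order(objects, above=box_above):
            # Return indices from top to bottom, or None when the
            # above/below relation is cyclic (no valid depth order).
            ts = TopologicalSorter()
            for i in range(len(objects)):
                ts.add(i)
            for i, j in combinations(range(len(objects)), 2):
                if above(objects[i], objects[j]):
                    ts.add(j, i)   # i must precede j in the output
                elif above(objects[j], objects[i]):
                    ts.add(i, j)
            try:
                return list(ts.static_order())
            except CycleError:
                return None

        # Box 0 floats directly above box 1, so 0 is reported first.
        print(depth_order([(0, 1, 0, 1, 2, 3), (0, 1, 0, 1, 0, 1)]))  # [0, 1]

    For disjoint axis-aligned boxes a cycle cannot actually arise; the CycleError branch matters for more general objects, such as slanted slabs arranged cyclically, which is exactly the situation the verification algorithm above is designed to catch.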

    Orthogonal Range Reporting and Rectangle Stabbing for Fat Rectangles

    In this paper we study two geometric data structure problems in the special case when the input objects or queries are fat rectangles. We show that in this case a significant improvement compared to the general case can be achieved. We describe data structures that answer two- and three-dimensional orthogonal range reporting queries in the case when the query range is a fat rectangle. Our two-dimensional data structure uses $O(n)$ words and supports queries in $O(\log\log U + k)$ time, where $n$ is the number of points in the data structure, $U$ is the size of the universe, and $k$ is the number of points in the query range. Our three-dimensional data structure needs $O(n\log^{\varepsilon}U)$ words of space and answers queries in $O(\log\log U + k)$ time. We also consider the rectangle stabbing problem on a set of three-dimensional fat rectangles. Our data structure uses $O(n)$ space and answers stabbing queries in $O(\log U\log\log U + k)$ time.

    Comment: extended version of a WADS'19 paper
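
    As a point of reference, the sketch below spells out a common definition of fatness for rectangles (bounded aspect ratio; the paper's exact model may differ in details) together with the trivial O(n)-time reporting baseline that any correct structure, including the $O(\log\log U + k)$-time structures above, must agree with.

        def is_fat(rect, alpha=2.0):
            # A rectangle (x0, x1, y0, y1) is alpha-fat when its longer
            # side is at most alpha times its shorter side (assumed definition).
            x0, x1, y0, y1 = rect
            w, h = x1 - x0, y1 - y0
            return max(w, h) <= alpha * min(w, h)

        def report(points, rect):
            # Brute-force orthogonal range reporting: O(n) per query.
            x0, x1, y0, y1 = rect
            return [(x, y) for x, y in points if x0 <= x <= x1 and y0 <= y <= y1]

        pts = [(1, 1), (2, 5), (4, 3)]
        assert is_fat((0, 4, 0, 4))        # a square has aspect ratio 1
        print(report(pts, (0, 4, 0, 4)))   # [(1, 1), (4, 3)]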

    Exploring Fog of War Concepts in Wargame Scenarios

    This thesis explores fog-of-war concepts through three submitted journal articles. The Department of Defense and U.S. Air Force are attempting to analyze war scenarios to aid the decision-making process; fog modeling improves realism in these wargame scenarios. The first article, "Navigating an Enemy Contested Area with a Parallel Search Algorithm" [1], investigates a parallel algorithm's speedup over the sequential implementation with varying map configurations in a tile-based wargame. The parallel speedup tends to exceed a factor of 50, but in certain situations the sequential algorithm outperforms it, depending on the locations and number of enemies on the map. The second article, "Modeling Fog of War Effects in AFSIM" [2], introduces the Fog Analysis Tool (FAT) for AFSIM, which introduces and manipulates fog in wargame scenarios. FAT integrates into AFSIM version 2.7.0, and scenario results verify that the tool's fog effects for positioning error, hits, and probability affect the success rate. The third article, "Applying Fog Analysis Tool to AFSIM Multi-Domain CLASS Scenarios" [3], furthers the verification of FAT by introducing fog across all war-fighting domains using a set of CLASS scenarios. The success-rate trends with fog impact in each domain scenario support FAT's effectiveness in disrupting the decision-making process for multi-domain operations. Together, the three articles demonstrate that fog can affect search, tasking, and decision-making processes in various types of wargame scenarios. The capabilities introduced in this thesis help wargame analysts improve decision-making in AFSIM military scenarios.

    Optimality program in segment and string graphs

    Planar graphs are known to allow subexponential algorithms running in time $2^{O(\sqrt n)}$ or $2^{O(\sqrt n \log n)}$ for most of the paradigmatic problems, while the brute-force time $2^{\Theta(n)}$ is very likely to be asymptotically best on general graphs. Intrigued by an algorithm packing curves in $2^{O(n^{2/3}\log n)}$ time by Fox and Pach [SODA'11], we investigate which problems have subexponential algorithms on the intersection graphs of curves (string graphs) or segments (segment intersection graphs), and which problems have no such algorithms under the ETH (Exponential Time Hypothesis). Among our results, we show that, quite surprisingly, 3-Coloring can also be solved in time $2^{O(n^{2/3}\log^{O(1)}n)}$ on string graphs, while an algorithm running in time $2^{o(n)}$ for 4-Coloring, even on axis-parallel segments (of unbounded length), would disprove the ETH. For 4-Coloring of unit segments, we show a weaker ETH lower bound of $2^{o(n^{2/3})}$, which exploits the celebrated Erdős-Szekeres theorem. The subexponential running time also carries over to Min Feedback Vertex Set, but not to Min Dominating Set and Min Independent Dominating Set.

    Comment: 19 pages, 15 figures
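
    For intuition about the graph class involved, the sketch below builds a segment intersection graph using standard orientation tests and 3-colors it by backtracking. This brute force can take exponential time in the worst case, which is precisely what the subexponential string-graph algorithm above improves upon; the code illustrates the graph class, not that algorithm.

        def orient(p, q, r):
            # Sign of the cross product (q - p) x (r - p).
            v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            return (v > 0) - (v < 0)

        def on_segment(p, q, r):
            # r is assumed collinear with p and q; test containment in the box.
            return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
                    and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

        def intersects(s, t):
            # Standard orientation-based segment intersection test.
            (p1, q1), (p2, q2) = s, t
            d1, d2 = orient(p1, q1, p2), orient(p1, q1, q2)
            d3, d4 = orient(p2, q2, p1), orient(p2, q2, q1)
            if d1 != d2 and d3 != d4:
                return True
            return ((d1 == 0 and on_segment(p1, q1, p2))
                    or (d2 == 0 and on_segment(p1, q1, q2))
                    or (d3 == 0 and on_segment(p2, q2, p1))
                    or (d4 == 0 and on_segment(p2, q2, q1)))

        def three_color(segments):
            # Backtracking 3-coloring of the segment intersection graph.
            n = len(segments)
            adj = [[j for j in range(n)
                    if j != i and intersects(segments[i], segments[j])]
                   for i in range(n)]
            color = [None] * n

            def go(i):
                if i == n:
                    return True
                for c in range(3):
                    if all(color[j] != c for j in adj[i]):
                        color[i] = c
                        if go(i + 1):
                            return True
                color[i] = None
                return False

            return color if go(0) else None

        # Three pairwise-crossing segments need three distinct colors.
        segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((0, 2), (4, 3))]
        print(three_color(segs))  # [0, 1, 2]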

    Report on "Geometry and representation theory of tensors for computer science, statistics and other areas."

    This is a technical report on the proceedings of the workshop held July 21 to July 25, 2008, at the American Institute of Mathematics, Palo Alto, California, organized by Joseph Landsberg, Lek-Heng Lim, Jason Morton, and Jerzy Weyman. We include a list of open problems coming from applications in four different areas: signal processing, the Mulmuley-Sohoni approach to P vs. NP, matchgates and holographic algorithms, and entanglement and quantum information theory. We emphasize the interactions between geometry and representation theory and these applied areas.

    Compact Routing on Internet-Like Graphs

    The Thorup-Zwick (TZ) routing scheme is the first generic stretch-3 routing scheme delivering a nearly optimal local memory upper bound. Using both direct analysis and simulation, we calculate the stretch distribution of this routing scheme on random graphs with power-law node degree distributions, $P_k \sim k^{-\gamma}$. We find that the average stretch is very low and virtually independent of $\gamma$. In particular, for the Internet interdomain graph, with $\gamma \sim 2.1$, the average stretch is around 1.1, with up to 70% of paths being shortest. As the network grows, the average stretch slowly decreases. The routing table is very small, too: it is well below its upper bounds, and its size is around 50 records for $10^4$-node networks. Furthermore, we find that both the average shortest-path length (i.e., distance) $\bar{d}$ and the width of the distance distribution $\sigma$ observed in the real Internet inter-AS graph have values that are very close to the minimums of the average stretch in the $\bar{d}$- and $\sigma$-directions. This leads us to the discovery of a unique critical quasi-stationary point of the average TZ stretch as a function of $\bar{d}$ and $\sigma$. The Internet distance distribution is located in a close neighborhood of this point. This observation suggests that the analytical structure of the average stretch function may be an indirect indicator of some hidden optimization criteria influencing the evolution of the Internet's interdomain topology.

    Comment: 29 pages, 16 figures
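
    The following sketch reproduces the flavor of this experiment under simplifying assumptions: a Barabási-Albert graph stands in for the paper's power-law topologies, the landmarks are a uniform sample of roughly sqrt(n log n) nodes, and a packet to a destination v outside the source's ball is charged the TZ path length d(u, l_v) + d(l_v, v), ignoring the handshaking refinement that lowers the stretch in practice.

        import math
        import random
        import networkx as nx

        def tz_average_stretch(G, samples=500, seed=0):
            rng = random.Random(seed)
            nodes = list(G.nodes)
            n = len(nodes)
            landmarks = rng.sample(nodes, max(1, round(math.sqrt(n * math.log(n)))))
            # BFS distances from every landmark to all nodes.
            dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}
            # Nearest landmark of each node, with the distance to it.
            near = {v: min((dist[l][v], l) for l in landmarks) for v in nodes}
            total = 0.0
            for _ in range(samples):
                u, v = rng.sample(nodes, 2)
                duv = nx.shortest_path_length(G, u, v)
                if duv < near[u][0]:
                    route = duv                 # v lies in u's ball: direct route
                else:
                    dvl, lv = near[v]
                    route = dist[lv][u] + dvl   # route via v's nearest landmark
                total += route / duv            # per-pair stretch, at most 3
            return total / samples

        G = nx.barabasi_albert_graph(10_000, 2, seed=1)
        print(tz_average_stretch(G))  # compare with the ~1.1 reported above

    The low averages such simulations produce reflect the intuition that high-degree hubs lie on most shortest paths in power-law graphs, so routing through a nearby landmark rarely costs much extra.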