
    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must be designed with the worst-case examples in mind, which makes them needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such models take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems: two related to decompositions of input objects and three suggested by computer graphics.

    Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and is it possible to compute such decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex.

    We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two much simpler algorithms for triangulating fat polygons in linear time. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met, and that a quadratic number of pieces may be needed if these restrictions are not met. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter.

    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects from two perspectives. This is the problem of preprocessing a scene into a data structure that can answer which object is first hit by a query ray shot from a given point in a given direction. We present a new data structure for answering vertical ray-shooting queries, that is, queries where the ray's direction is fixed, as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems. Another problem studied in the field of computer graphics is the depth-order problem; we study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above another if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying whether a given depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen when the above/below relationship contains a cycle). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects.

    The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; this report is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should run faster than if all of the objects are visible and have overlapping borders. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm can handle curved objects and situations where the objects do not have a depth order, two features missing from most other algorithms that perform hidden-surface removal.
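    To make the above/below relation and the possibility of cyclic scenes concrete, here is a minimal sketch, not the thesis's algorithm, that builds the relation for hypothetical axis-aligned boxes standing in for fat objects and attempts a top-to-bottom order by topological sorting; a cycle in the relation is exactly the case in which no valid depth order exists. The box representation and the O(n^2) pairwise construction are illustrative assumptions only.

```python
from itertools import combinations
from collections import deque

# Toy stand-in for a fat object: an axis-aligned box (xmin, xmax, ymin, ymax, zmin, zmax).
def xy_overlap(a, b):
    """The xy-projections of boxes a and b share a point."""
    return a[0] < b[1] and b[0] < a[1] and a[2] < b[3] and b[2] < a[3]

def above(a, b):
    """Box a is above box b: their projections overlap and a has the higher
    z-value there (for boxes with disjoint z-ranges this is a z comparison)."""
    return xy_overlap(a, b) and a[4] >= b[5]

def depth_order(boxes):
    """Return indices in top-to-bottom order, or None if the above/below
    relation has a cycle, i.e. no depth order exists.  Naive O(n^2)
    construction, unlike the near-linear algorithms for fat objects."""
    n = len(boxes)
    succ = [[] for _ in range(n)]   # edge i -> j when box i is above box j
    indeg = [0] * n
    for i, j in combinations(range(n), 2):
        if above(boxes[i], boxes[j]):
            succ[i].append(j); indeg[j] += 1
        elif above(boxes[j], boxes[i]):
            succ[j].append(i); indeg[i] += 1
    queue = deque(k for k in range(n) if indeg[k] == 0)
    order = []
    while queue:                     # Kahn's topological sort
        k = queue.popleft()
        order.append(k)
        for m in succ[k]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order if len(order) == n else None
```

    For general fat objects the box test would be replaced by an exact geometric predicate, and, as the abstract notes, the above/below relation can then genuinely contain cycles, which is why a separate verification algorithm is useful.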

    On Ray Shooting for Triangles in 3-Space and Related Problems

    We consider several problems that involve lines in three dimensions, and present improved algorithms for solving them. The problems include (i) ray shooting amid triangles in $R^3$, (ii) reporting intersections between query lines (segments, or rays) and input triangles, as well as approximately counting the number of such intersections, (iii) computing the intersection of two nonconvex polyhedra, (iv) detecting, counting, or reporting intersections in a set of lines in $R^3$, and (v) output-sensitive construction of an arrangement of triangles in three dimensions. Our approach is based on the polynomial partitioning technique. For example, our ray-shooting algorithm processes a set of $n$ triangles in $R^3$ into a data structure for answering ray-shooting queries amid the given triangles, which uses $O(n^{3/2+\varepsilon})$ storage and preprocessing, and answers a query in $O(n^{1/2+\varepsilon})$ time, for any $\varepsilon>0$. This is a significant improvement over known results, obtained more than 25 years ago, in which, with this amount of storage, the query time bound is roughly $n^{5/8}$. The algorithms for the other problems have similar performance bounds, with similar improvements over previous results. We also derive a nontrivial improved tradeoff between storage and query time. Using it, we obtain algorithms that answer $m$ queries on $n$ objects in $\max\{O(m^{2/3}n^{5/6+\varepsilon} + n^{1+\varepsilon}),\; O(m^{5/6+\varepsilon}n^{2/3} + m^{1+\varepsilon})\}$ time, for any $\varepsilon>0$, again an improvement over the earlier bounds. Comment: 33 pages, 7 figures.
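    For orientation, the query these structures preprocess for is the classic first-hit problem. The brute-force baseline below, a standard Moller-Trumbore ray-triangle test scanned over all triangles (illustrative code, not the paper's polynomial-partitioning structure), answers each query in $O(n)$ time; the bounds quoted above trade storage and preprocessing for sublinear query time.

```python
import numpy as np

def ray_triangle_t(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test: parameter t >= 0 at which the ray origin + t*direction
    hits triangle tri = (v0, v1, v2), or None if it misses."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                      # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t >= eps else None

def first_hit(origin, direction, triangles):
    """Brute-force ray shooting: scan all n triangles and return (t, index) of the
    first one hit, or None.  O(n) per query, versus roughly O(n^{1/2+eps}) per
    query for the data structure described in the abstract."""
    best = None
    for idx, tri in enumerate(triangles):
        t = ray_triangle_t(origin, direction, tri)
        if t is not None and (best is None or t < best[0]):
            best = (t, idx)
    return best
```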

    Computational Geometry Column 42

    A compendium of thirty previously published open problems in computational geometry is presented. Comment: 7 pages; 72 references.

    Vertical ray shooting and computing depth orders of fat objects

    We present new results for three problems dealing with a set $\mathcal{P}$ of $n$ convex constant-complexity fat polyhedra in 3-space. (i) We describe a data structure for vertical ray shooting in $\mathcal{P}$ that has $O(\log^2 n)$ query time and uses $O(n\log^2 n)$ storage. (ii) We give an algorithm to compute in $O(n\log^3 n)$ time a depth order on $\mathcal{P}$ if it exists. (iii) We give an algorithm to verify in $O(n\log^3 n)$ time whether a given order on $\mathcal{P}$ is a valid depth order. All three results improve on previous results.
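    As a point of reference for result (i), a vertical ray-shooting query asks which object is hit first by a ray shot straight down from a query point. A naive linear scan over hypothetical axis-aligned boxes, used here only as a stand-in for the convex constant-complexity fat polyhedra, looks like the sketch below; the structure above answers the same query in $O(\log^2 n)$ time.

```python
def vertical_ray_shoot(boxes, x, y, z):
    """Shoot a ray straight down from (x, y, z) amid axis-aligned boxes
    (xmin, xmax, ymin, ymax, zmin, zmax) and return the index of the first box
    hit, or None.  O(n) per query; illustrative baseline only."""
    best_idx, best_top = None, float("-inf")
    for i, (xmin, xmax, ymin, ymax, zmin, zmax) in enumerate(boxes):
        if xmin <= x <= xmax and ymin <= y <= ymax and zmax <= z and zmax > best_top:
            best_idx, best_top = i, zmax
    return best_idx
```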

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible; it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling-pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility-sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility-sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray-geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved-ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint-transition distortion artifacts of prior art methods.
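    The "conventional image" whose limitations motivate the paradigm can be pictured as the item-buffer sketch below: the scene is point-sampled once per pixel center with a z-buffer, each pixel records the nearest primitive, and any primitive whose screen footprint falls between the uniform samples is missed, which is the gap that sampling-pattern generalization closes. The triangle representation, the flat per-triangle depth, and all names here are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def point_in_tri(p, a, b, c):
    """Is 2D point p inside triangle (a, b, c)?  Sign test on edge cross products."""
    d1, d2, d3 = cross2(b - a, p - a), cross2(c - b, p - b), cross2(a - c, p - c)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

def item_buffer(triangles, width, height):
    """Conventional-image visibility: one sample per pixel center, z-buffered.
    triangles: list of (tid, v0, v1, v2) with vertices (x, y, z), x and y already
    in [0, 1] screen space, smaller z meaning closer to the eye.  Returns the set
    of primitive ids owning at least one pixel; primitives that miss every pixel
    center are (wrongly) reported hidden."""
    ids = -np.ones((height, width), dtype=int)
    depth = np.full((height, width), np.inf)
    for tid, v0, v1, v2 in triangles:
        a, b, c = (np.asarray(v, dtype=float)[:2] for v in (v0, v1, v2))
        zavg = (v0[2] + v1[2] + v2[2]) / 3.0   # flat depth per triangle, for brevity
        for j in range(height):
            for i in range(width):
                p = np.array([(i + 0.5) / width, (j + 0.5) / height])
                if point_in_tri(p, a, b, c) and zavg < depth[j, i]:
                    depth[j, i] = zavg
                    ids[j, i] = tid
    return {int(t) for t in np.unique(ids) if t >= 0}
```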