
    An efficient output-sensitive hidden surface removal algorithm and its parallelization

    In this paper we present an algorithm for hidden surface removal for a class of polyhedral surfaces that have the property that they can be ordered relatively quickly, such as terrain maps. A distinguishing feature of this algorithm is that its running time is sensitive to the actual size of the visible image rather than the total number of intersections in the image plane, which can be much larger than the visible image. The time complexity of this algorithm is O((k + n) log n log log n), where n and k are respectively the input and output sizes. Thus, in a significant number of situations it will be faster than the worst-case optimal algorithms, which have running time Ω(n²) irrespective of the output size (whereas the output size k is O(n²) only in the worst case). We also present a parallel algorithm based on a similar approach which runs in time O(log⁴(n + k)) using O((n + k)/log(n + k)) processors in a CREW PRAM model. All our bounds are obtained using amortized analysis.
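
    As a toy illustration of what output sensitivity means here (an illustration only, not the paper's 2.5D algorithm), the 1.5D analogue of the problem can be solved by sweeping the terrain vertices away from the viewpoint while maintaining the steepest viewing slope seen so far; only vertices that raise that slope are visible, so work beyond the sweep is proportional to the visible output. The function name and terrain representation below are assumptions.

    # Sketch: visibility of a 1.5D polyline terrain from a viewpoint.
    # A vertex is visible iff it raises the steepest viewing slope seen
    # so far in a left-to-right sweep. Illustrative only; the paper
    # treats 2.5D polyhedral terrains with more involved machinery.

    def visible_vertices(terrain, viewpoint):
        """terrain: (x, y) vertices sorted by x, all to the right of
        viewpoint; viewpoint: (x, y) of the observer."""
        vx, vy = viewpoint
        best_slope = float("-inf")   # current angular horizon
        visible = []
        for x, y in terrain:
            slope = (y - vy) / (x - vx)
            if slope >= best_slope:  # vertex pokes above the horizon
                visible.append((x, y))
                best_slope = slope
        return visible

    print(visible_vertices([(1, 0), (2, 3), (3, 1), (4, 7)], (0, 0)))
    # -> [(1, 0), (2, 3), (4, 7)]; (3, 1) is hidden behind (2, 3)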

    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must be designed with the worst-case examples in mind, which makes them needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat.

    We look at fat objects in the context of five different problems: two related to decompositions of input objects and three suggested by computer graphics. Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and is it possible to obtain them quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex.

    We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two much simpler algorithms for triangulating fat polygons in linear time. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met, and that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter.

    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray from a given point in a given direction. We present a new data structure for answering vertical ray-shooting queries, that is, queries where the ray's direction is fixed, as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems.

    Another problem studied in the field of computer graphics is the depth-order problem; we study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above another if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying whether a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects.

    The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; this is called the visibility map. The main difficulty is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should run faster than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm can handle curved objects and situations where the objects do not have a depth order, two features missing from most other hidden-surface removal algorithms.
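
    The depth-order verification idea admits a compact illustration: reject a proposed order as soon as some object lies above an object listed before it. The axis-aligned boxes and the brute-force O(n²) pairwise loop below are assumptions made for the sketch; the thesis obtains much better bounds for fat objects.

    # Sketch: verify a proposed top-to-bottom depth order for axis-
    # aligned boxes. Two boxes are related only if their xy-projections
    # overlap; the one whose z-range lies entirely above is "above".
    # Brute-force O(n^2) illustration, not the thesis's algorithm.

    def xy_overlap(a, b):
        """True iff the xy-projections of boxes a and b intersect."""
        return a[0] < b[1] and b[0] < a[1] and a[2] < b[3] and b[2] < a[3]

    def verify_depth_order(boxes, order):
        """boxes: name -> (xlo, xhi, ylo, yhi, zlo, zhi);
        order: names listed from top to bottom."""
        pos = {name: i for i, name in enumerate(order)}
        names = list(boxes)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if not xy_overlap(boxes[a], boxes[b]):
                    continue
                if boxes[a][4] >= boxes[b][5]:    # a entirely above b
                    hi, lo = a, b
                elif boxes[b][4] >= boxes[a][5]:  # b entirely above a
                    hi, lo = b, a
                else:
                    continue                      # interleaved in z
                if pos[hi] > pos[lo]:             # above-object listed behind
                    return False
        return True

    boxes = {"A": (0, 2, 0, 2, 5, 6), "B": (1, 3, 1, 3, 0, 1)}
    print(verify_depth_order(boxes, ["A", "B"]))  # True
    print(verify_depth_order(boxes, ["B", "A"]))  # False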

    Terrain prickliness: theoretical grounds for high complexity viewsheds

    An important task when working with terrain models is computing viewsheds: the parts of the terrain visible from a given viewpoint. When the terrain is modeled as a polyhedral terrain, the viewshed is composed of the union of all the triangle parts that are visible from the viewpoint. The complexity of a viewshed can vary significantly, from constant to quadratic in the number of terrain vertices, depending on the terrain topography and the viewpoint position. In this work we study a new topographic attribute, the prickliness, which measures the number of local maxima in a terrain from all possible perspectives. We show that the prickliness effectively captures the potential of 2.5D terrains to have high-complexity viewsheds, and we present near-optimal algorithms to compute the prickliness of 1.5D and 2.5D terrains. We also report on experiments relating the prickliness of real-world 2.5D terrains to the size of the terrains and to their viewshed complexity.
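
    As a heavily simplified proxy (an illustration, not the paper's full definition), the local maxima of a gridded terrain seen from directly above can be counted as below; the actual prickliness takes the maximum of such counts over all viewing directions, a sweep this sketch omits.

    # Count local maxima of a gridded terrain from the vertical
    # perspective only: one term of the prickliness, which maximizes
    # this count over all viewing directions.

    def count_local_maxima(grid):
        """grid: 2D list of heights; a cell is a local maximum if it is
        strictly higher than all of its existing 8 neighbours."""
        rows, cols = len(grid), len(grid[0])
        count = 0
        for i in range(rows):
            for j in range(cols):
                neighbours = [grid[i + di][j + dj]
                              for di in (-1, 0, 1) for dj in (-1, 0, 1)
                              if (di or dj)
                              and 0 <= i + di < rows
                              and 0 <= j + dj < cols]
                if all(grid[i][j] > h for h in neighbours):
                    count += 1
        return count

    print(count_local_maxima([[1, 2, 1, 4],
                              [2, 5, 2, 1],
                              [1, 2, 1, 2]]))  # -> 2 (the 5 and the 4)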

    Translation queries for sets of polygons


    Hierarchical occlusion culling for arbitrarily-meshed height fields

    Many graphics applications today need high-speed 3-D visualization of height fields. Most of these applications deal with the display of digital terrain models characterized by a simple, but vast, non-overlapping mesh of triangles. A great deal of research has been done to find methods of optimizing such systems. The goal of this work is to establish an algorithm to efficiently preprocess a hierarchical height-field model that enables real-time culling of occluded geometry while still allowing for classic terrain-rendering frameworks. By exploiting the planar-monotone characteristics of height fields, it is possible to create a unique and efficient occlusion-culling method that is optimized for terrain rendering and similar applications. Previous work has shown that culling is possible with certain regularly gridded height-field models, but not until now has a system been shown to work with all height fields, regardless of how their meshes are constructed. By freeing the system of meshing restrictions, it is possible to incorporate a number of broader height-field algorithms with widely used applications such as flight simulators, GIS, and computer games.
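
    The flavour of such culling can be sketched on a 1D height-field profile (a simplified stand-in for the thesis's method, which handles arbitrarily meshed 2.5D height fields): precompute a tree of per-interval maximum heights, then sweep front to back and skip any subtree whose best possible elevation angle already falls below the horizon accumulated so far.

    # Hierarchical occlusion culling on a 1D height-field profile.
    # build_max_tree precomputes interval maxima; visible_columns culls
    # whole subtrees whose tallest column, even at the subtree's nearest
    # distance, cannot rise above the current angular horizon.
    # Simplified illustration only; not the thesis's 2.5D algorithm.

    def build_max_tree(heights):
        """Returns a dict (lo, hi) -> max height over columns lo..hi-1."""
        tree = {}
        def rec(lo, hi):
            if hi - lo == 1:
                tree[(lo, hi)] = heights[lo]
            else:
                mid = (lo + hi) // 2
                tree[(lo, hi)] = max(rec(lo, mid), rec(mid, hi))
            return tree[(lo, hi)]
        rec(0, len(heights))
        return tree

    def visible_columns(heights, tree, eye_h):
        """Columns stand at x = 1, 2, ...; the eye sits at (0, eye_h)."""
        visible, horizon = [], float("-inf")
        def rec(lo, hi):
            nonlocal horizon
            # Conservative bound: tallest height in the interval at the
            # interval's nearest distance. Below the horizon => cull all.
            if (tree[(lo, hi)] - eye_h) / (lo + 1) < horizon:
                return
            if hi - lo == 1:                 # a surviving leaf is visible
                visible.append(lo)
                horizon = (heights[lo] - eye_h) / (lo + 1)
                return
            mid = (lo + hi) // 2
            rec(lo, mid)                     # near half first
            rec(mid, hi)
        rec(0, len(heights))
        return visible

    heights = [1, 3, 1, 2, 9, 2]
    print(visible_columns(heights, build_max_tree(heights), 0.0))  # [0, 1, 4]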

    Sensor-Based Adaptive Control and Optimization of Lower-Limb Prosthesis

    Recent developments in prosthetics have enabled the development of powered prosthetic ankles (PPA). The advent of such technologies has drastically improved impaired gait, increasing balance and reducing metabolic energy consumption by providing net positive power. However, control challenges limit the performance and feasibility of today's devices. With the addition of sensors and motors, PPA systems must continuously make control decisions and adapt by manipulating the control parameters of the prosthesis. There are multiple challenges in the optimization and control of PPAs. A prominent challenge is objectively setting up the system and its calibration parameters to fit each subject. Another is whether it is possible to detect changes in intention and terrain before prosthetic use, and how the system should react and adapt to them. In the first part of this study, a model for energy expenditure was proposed using electromyogram (EMG) signals from the residual lower limbs of PPA users. The proposed model was optimized to minimize energy expenditure. Optimization was performed using a modified Nelder-Mead approach with Latin Hypercube sampling. Results of the proposed method were compared to expert values, and the approach was shown to be a feasible alternative for tuning in a shorter time. In the second part of the study, the control challenges arising from the lack of adaptivity in PPAs were investigated. The PPA system used here is enhanced with impedance-control parameters that allow it to provide different levels of assistance. However, current systems are set to fixed values and fail to account for the varied terrains and intentions encountered throughout the day. In this study, a pseudo-real-time adaptive control system was proposed to predict changes in gait and provide a smoother gait. The proposed control system fused physiological, kinetic, and kinematic data to predict the change, using machine-learning-based methods. Results of the study showed an accuracy of up to 89.7 percent in predicting the change across four different cases.
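
    The tuning procedure described, Nelder-Mead seeded from a Latin Hypercube sample, can be sketched with standard scientific-Python tools; the two-parameter quadratic objective and the bounds below are placeholders, since the EMG-based energy-expenditure model itself is not reproduced here.

    # Sketch: draw Latin Hypercube starting points over the parameter
    # bounds, run Nelder-Mead from each, keep the best optimum found.
    # The objective is a toy stand-in for the EMG-based model.
    from scipy.optimize import minimize
    from scipy.stats import qmc

    def energy_expenditure(params):          # placeholder objective
        stiffness, timing = params
        return (stiffness - 3.2) ** 2 + 2.0 * (timing - 0.6) ** 2

    lower, upper = [0.0, 0.0], [10.0, 1.0]   # assumed parameter bounds
    starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(8),
                       lower, upper)

    best = min((minimize(energy_expenditure, x0, method="Nelder-Mead")
                for x0 in starts), key=lambda res: res.fun)
    print(best.x, best.fun)                  # ~[3.2, 0.6], ~0 for this toy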

    Feature-rich distance-based terrain synthesis

    This thesis describes a novel terrain synthesis method based on distances in a weighted graph. The method begins with a regular lattice with arbitrary edge weights; heights are determined by path cost from a set of generator nodes. The shape of an individual terrain feature, such as a mountain, hill, or crater, is specified by a monotonically decreasing profile describing its cross-sectional shape, while the locations of features in the terrain are specified by placing the generators. The pathing places ridges whose initial locations have a dendritic shape. The method is robust and easy to control, making it possible to create pareidolia effects. It can produce a wide range of realistic synthetic terrains such as mountain ranges, craters, faults, cinder cones, and hills. The algorithm incorporates random graph edge weights, permits the inclusion of multiple topography profiles, and allows precise control over the placement of terrain features and their heights. These properties allow the artist to create highly heterogeneous terrains that compare quite favorably to those of existing methods.
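
    The core of the method as described, heights derived from shortest-path cost to the nearest generator through randomly weighted lattice edges and shaped by a decreasing profile, fits in a short multi-source Dijkstra sketch; the grid size, the weight distribution, and the exponential profile below are placeholder choices rather than the thesis's parameters.

    # Multi-source Dijkstra over a 4-connected lattice with fixed random
    # edge weights; each cell's height is a monotonically decreasing
    # profile of its path cost to the nearest generator node.
    import heapq, math, random

    def synthesize(rows, cols, generators, peak=1.0, falloff=0.08, seed=1):
        rng, weights = random.Random(seed), {}
        def w(a, b):                         # fixed random weight per edge
            key = (min(a, b), max(a, b))
            if key not in weights:
                weights[key] = rng.uniform(0.5, 1.5)
            return weights[key]
        cost = [[math.inf] * cols for _ in range(rows)]
        heap = []
        for r, c in generators:              # all generators start at cost 0
            cost[r][c] = 0.0
            heapq.heappush(heap, (0.0, r, c))
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > cost[r][c]:
                continue                     # stale queue entry
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + w((r, c), (nr, nc))
                    if nd < cost[nr][nc]:
                        cost[nr][nc] = nd
                        heapq.heappush(heap, (nd, nr, nc))
        # The profile turns path cost into height: higher near generators.
        return [[peak * math.exp(-falloff * cost[r][c]) for c in range(cols)]
                for r in range(rows)]

    terrain = synthesize(64, 64, generators=[(16, 16), (48, 40)])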