
    System of Terrain Analysis, Energy Estimation and Path Planning for Planetary Exploration by Robot Teams

    NASA's long-term plans involve a return to manned Moon missions, and eventually sending humans to Mars. The focus of this project is the use of autonomous mobile robotics to enhance these endeavors. This research details the creation of a system of terrain classification, traversal-energy estimation, and low-cost path planning for teams of inexpensive and potentially expendable robots. The first stage of this project was the creation of a model which estimates the energy required to traverse varying terrain types for a six-wheel rocker-bogie rover. The wheel/soil interaction model uses Shibly's modified Bekker equations and incorporates a new simplified rocker-bogie model for estimating wheel loads. In all but a single trial, the relative energy requirements for each soil type were correctly predicted by the model. A path planner for complete coverage, intended to minimize energy consumption, was designed and tested. It accepts as input terrain maps detailing the energy consumption required to move to each adjacent location. Exploration is performed via a cost function which determines the robot's next move. This system was successfully tested for multiple robots by means of a shared exploration map. At peak efficiency, the energy consumed by our path planner was only 56% of that used by the best-case back-and-forth coverage pattern. After performing a sensitivity analysis of Shibly's equations to determine which soil parameters most affect energy consumption, a neural network terrain classifier was designed and tested. The terrain classifier labels all traversable terrain as one of three soil types and then assigns an assumed set of soil parameters. The classifier performed well overall, but had some difficulty distinguishing large rocks from sand. This work presents a system which successfully classifies terrain imagery into one of three soil types, assesses the energy requirements of terrain traversal for these soil types, and plans efficient paths of complete coverage for the imaged area. While further improvements can be made in all areas, the work achieves its stated goals.
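
    To make the cost-function exploration concrete, below is a minimal Python sketch of a greedy energy-aware coverage planner on a grid. The map format, the greedy choice of the cheapest unvisited neighbour, and the BFS detour when the robot is boxed in are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of an energy-aware complete-coverage planner on a grid.
# Assumptions (not from the thesis): each cell holds the energy needed to
# enter it; the robot greedily picks the cheapest unvisited neighbour and
# falls back to a BFS route to the nearest unvisited cell when boxed in.
from collections import deque

def plan_coverage(energy, start):
    rows, cols = len(energy), len(energy[0])
    visited = {start}
    path, total = [start], 0.0

    def neighbours(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield (nr, nc)

    def nearest_unvisited(cell):
        # BFS fallback: route (and its energy) to the closest unvisited cell.
        seen, queue = {cell}, deque([(cell, [], 0.0)])
        while queue:
            cur, route, cost = queue.popleft()
            for nxt in neighbours(cur):
                if nxt in seen:
                    continue
                seen.add(nxt)
                nroute = route + [nxt]
                ncost = cost + energy[nxt[0]][nxt[1]]
                if nxt not in visited:
                    return nroute, ncost
                queue.append((nxt, nroute, ncost))
        return [], 0.0

    cell = start
    while len(visited) < rows * cols:
        options = [n for n in neighbours(cell) if n not in visited]
        if options:  # greedy step: cheapest adjacent unvisited cell
            nxt = min(options, key=lambda n: energy[n[0]][n[1]])
            route, cost = [nxt], energy[nxt[0]][nxt[1]]
        else:        # boxed in: detour to the nearest unvisited cell
            route, cost = nearest_unvisited(cell)
        visited.update(route)
        path.extend(route)
        total += cost
        cell = route[-1]
    return path, total

grid = [[1.0, 4.0, 1.0], [1.0, 9.0, 1.0], [1.0, 1.0, 1.0]]
path, energy_used = plan_coverage(grid, (0, 0))
print(len(path), energy_used)
```

    A greedy planner of this kind can undercut a fixed back-and-forth sweep because it routes around high-energy cells rather than crossing them on a fixed schedule, which is the behaviour the reported 56% figure reflects.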

    An intelligent Geographic Information System for design

    Recent advances in geographic information systems (GIS) and artificial intelligence (AI) techniques have been summarised, concentrating on the theoretical aspects of their construction and use. Existing projects combining AI and GIS have also been discussed, with attention paid to the interfacing methods used and problems uncovered by the approaches. AI and GIS have been combined in this research to create an intelligent GIS for design. This has been applied to offshore pipeline route design. The system was tested using data from a real pipeline design project. [Continues.]

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream processor model, used by the GPU in order to maintain high performance, often pose a problem in porting CPU algorithms for both video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access read/fixed write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
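
    As an illustration of the hierarchical reductions that such work maps onto graphics hardware, the following NumPy sketch builds a mip-map pyramid on the CPU by repeated 2x2 averaging; on the GPU each level would be one data-parallel pass. The function name and image shape are assumptions for illustration, not the thesis's shader code.

```python
# CPU-side NumPy sketch of a mip-map pyramid: each level halves the
# resolution by averaging disjoint 2x2 blocks. On the GPU, each level
# corresponds to one data-parallel pass, which is the hierarchical
# access pattern exploited for fast large-scale reductions.
import numpy as np

def build_mipmaps(image):
    """Pyramid [full-res, half-res, ..., 1x1] for a square power-of-two,
    single-channel image."""
    levels = [image.astype(np.float32)]
    while levels[-1].shape[0] > 1:
        h, w = levels[-1].shape
        # reshape exposes the 2x2 blocks; mean over the block axes collapses them
        levels.append(levels[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

img = np.random.rand(8, 8)
pyramid = build_mipmaps(img)
print([lvl.shape for lvl in pyramid])                   # [(8, 8), (4, 4), (2, 2), (1, 1)]
print(bool(np.isclose(pyramid[-1][0, 0], img.mean())))  # coarsest level = global mean
```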

    Geometric algorithms for geographic information systems

    A geographic information system (GIS) is a software package for storing geographic data and performing complex operations on the data. Examples are the reporting of all land parcels that will be flooded when a certain river rises above some level, or analyzing the costs, benefits, and risks involved with the development of industrial activities at some place. A substantial part of all activities performed by a GIS involves computing with the geometry of the data, such as location, shape, proximity, and spatial distribution. The amount of data stored in a GIS is usually very large, and it calls for efficient methods to store, manipulate, analyze, and display such amounts of data. This makes the field of GIS an interesting source of problems to work on for computational geometers. In chapters 2-5 of this thesis we give new geometric algorithms to solve four selected GIS problems. These chapters are preceded by an introduction that provides the necessary background, overview, and definitions to appreciate the following chapters. The four problems that we study in chapters 2-5 are the following. Subdivision traversal: we give a new method to traverse planar subdivisions without using mark bits or a stack. Contour trees and seed sets: we give a new algorithm for generating a contour tree for d-dimensional meshes, and use it to determine a seed set of minimum size that can be used for isosurface generation. This is the first algorithm that guarantees a seed set of minimum size. Its running time is quadratic in the input size, which is not fast enough for many practical situations; therefore, we also give a faster algorithm that produces small (although not minimal) seed sets. Settlement selection: we give a number of new models for the settlement selection problem. When settlements, such as cities, have to be displayed on a map, displaying all of them may clutter the map, depending on the map scale. Choices have to be made as to which settlements are selected and which ones are omitted. Compared to existing selection methods, our methods have a number of favorable properties. Facility location: we give the first algorithm for computing the furthest-site Voronoi diagram on a polyhedral terrain, and show that its running time is near-optimal. We use the furthest-site Voronoi diagram to solve the facility location problem: the determination of the point on the terrain that minimizes the maximal distance to a given set of sites on the terrain.
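
    As a flavour of what a settlement-selection model computes, here is a hedged sketch of one classic spacing-based method, not necessarily one of the thesis's new models: settlements are visited in decreasing importance and kept only if they lie far enough from every settlement already selected.

```python
# Sketch of a generic spacing-based settlement-selection model (for
# illustration only; the thesis defines its own family of models):
# visit settlements in decreasing importance (e.g., population) and keep
# each one only if it is at least `min_dist` map units from all kept ones.
import math

def select_settlements(settlements, min_dist):
    """settlements: list of (name, x, y, importance) tuples."""
    chosen = []
    for name, x, y, imp in sorted(settlements, key=lambda s: -s[3]):
        if all(math.hypot(x - cx, y - cy) >= min_dist
               for _, cx, cy, _ in chosen):
            chosen.append((name, x, y, imp))
    return chosen

cities = [("A", 0, 0, 900), ("B", 1, 1, 500), ("C", 8, 0, 300), ("D", 9, 1, 700)]
print([c[0] for c in select_settlements(cities, min_dist=3.0)])  # ['A', 'D']
```

    Shrinking `min_dist` as the map scale grows lets the same model drive selection at every zoom level, which is the property such models are built around.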

    Efficient Point Clustering for Visualization

    The visualization of large spatial point data sets constitutes a problem with respect to runtime and quality. A visualization of raw data often leads to occlusion and clutter, and thus a loss of information. Furthermore, mobile devices in particular have problems displaying millions of data items. Often, thinning via sampling is not the optimal choice because users want to see distributional patterns, cardinalities, and outliers. In particular for visual analytics, an aggregation of this type of data is very valuable for providing an interactive user experience. This thesis defines the problem of visual point clustering that leads to proportional circle maps. It furthermore introduces a set of quality measures that assess different aspects of the resulting circle representations. The Circle Merging Quadtree constitutes a novel and efficient method to produce visual point clusterings via aggregation. It outperforms comparable methods in terms of runtime, and also when evaluated with the aforementioned quality measures. Moreover, the introduction of a preprocessing step leads to further substantial performance improvements and a guaranteed stability of the Circle Merging Quadtree. This thesis furthermore addresses the incorporation of miscellaneous attributes into the aggregation. It discusses means to provide statistical values for numerical and textual attributes that are suitable for side views such as plots and data tables. The incorporation of multiple data sets, or data sets that contain class attributes, poses another problem for aggregation and visualization. This thesis provides methods for extending the Circle Merging Quadtree to output pie chart maps or maps that contain circle packings. For the latter variant, this thesis provides the results of a user study that investigates the methods and the introduced quality criteria. In the context of providing methods for interactive data visualization, this thesis finally presents the VAT System, where VAT stands for visualization, analysis, and transformation. This system constitutes an exploratory geographical information system that implements principles of visual analytics for working with spatio-temporal data. This thesis details the user interface concept for facilitating exploratory analysis and provides the results of two user studies that assess the approach.
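
    The following Python sketch shows the core idea behind aggregation into proportional circles: circles whose screen representations overlap are merged into one whose area, and hence point count, is the sum. The quadratic merge loop below is only for illustration; the actual Circle Merging Quadtree resolves overlaps hierarchically and far more efficiently.

```python
# Simplified sketch of aggregation into proportional circles: each point
# starts as a unit circle; any two overlapping circles are merged into one
# whose area (hence count) is the sum, placed at their weighted centroid.
# This brute-force loop only demonstrates the idea, not the quadtree.
import math

def merge_circles(points, scale=1.0):
    circles = [(x, y, 1) for x, y in points]   # (x, y, point count)
    merged = True
    while merged:
        merged = False
        for i in range(len(circles)):
            for j in range(i + 1, len(circles)):
                xi, yi, ni = circles[i]
                xj, yj, nj = circles[j]
                # radius ~ sqrt(count) keeps circle area proportional to count
                ri, rj = scale * math.sqrt(ni), scale * math.sqrt(nj)
                if math.hypot(xi - xj, yi - yj) < ri + rj:  # overlap: merge
                    n = ni + nj
                    cx = (xi * ni + xj * nj) / n            # weighted centroid
                    cy = (yi * ni + yj * nj) / n
                    circles[j] = (cx, cy, n)
                    del circles[i]
                    merged = True
                    break
            if merged:
                break
    return circles

pts = [(0, 0), (0.5, 0.5), (0.4, 0.1), (10, 10)]
for x, y, n in merge_circles(pts):
    print(f"circle at ({x:.2f}, {y:.2f}) aggregating {n} point(s)")
```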

    Automatic near real-time flood detection in high resolution X-band synthetic aperture radar satellite data using context-based classification on irregular graphs

    This thesis is an outcome of the project “Flood and damage assessment using very high resolution SAR data” (SAR-HQ), which is embedded in the interdisciplinary RIMAX (Risk Management of Extreme Flood Events) programme, funded by the Federal Ministry of Education and Research (BMBF). It comprises the results of three scientific papers on automatic near real-time flood detection in high resolution X-band synthetic aperture radar (SAR) satellite data for operational rapid mapping activities in support of disaster and crisis management. Flood situations appear to be becoming more frequent and destructive in many regions of the world. A rising awareness of the availability of satellite-based cartographic information has led to an increase in requests to the corresponding mapping services to support civil-protection and relief organizations with disaster-related mapping and analysis activities. Due to the rising number of satellite systems with high revisit frequencies, a growing pool of SAR data is available during operational flood mapping activities. This offers the possibility to observe the whole extent of even large-scale flood events and their spatio-temporal evolution, but it also calls for computationally efficient and automatic flood detection methods, which should drastically reduce the input required from a human image interpreter. This thesis provides solutions for the near real-time derivation of detailed flood parameters such as flood extent, flood-related backscatter changes, and flood classification probabilities from the new generation of high resolution X-band SAR satellite imagery in a completely unsupervised way. These data are, in comparison to images from conventional medium-resolution SAR sensors, characterized by an increased intra-class and decreased inter-class variability due to the reduced mixed-pixel phenomenon. This problem is addressed by utilizing multi-contextual models on irregular hierarchical graphs, which account for the fact that semantic image information resides less in single pixels than in homogeneous image objects and their mutual relations. A hybrid Markov random field (MRF) model is developed, which integrates scale-dependent as well as spatio-temporal contextual information into the classification process by combining hierarchical causal Markov image modeling on automatically generated irregular hierarchical graphs with noncausal Markov modeling related to planar MRFs. This model is initialized in an unsupervised manner by an automatic tile-based thresholding approach, which solves the flood detection problem in large-size SAR data with small a priori class probabilities by statistical parameterization of local bi-modal class-conditional density functions in a time-efficient manner. Experiments performed on TerraSAR-X StripMap data of Southwest England and ScanSAR data of north-eastern Namibia during large-scale flooding show the effectiveness of the proposed methods in terms of classification accuracy, computational performance, and transferability. It is further demonstrated that hierarchical causal Markov models such as hierarchical maximum a posteriori (HMAP) and hierarchical marginal posterior mode (HMPM) estimation can be effectively used for modeling the inter-spatial context of X-band SAR data for flood and change detection purposes. Although the HMPM estimator is computationally more demanding than the HMAP estimator, it is found to be more suitable in terms of classification accuracy.
Further, it offers the possibility to compute marginal posterior entropy-based confidence maps, which are used for the generation of flood possibility maps that express the uncertainty in the labeling of each image element. The supplementary integration of intra-spatial and, optionally, temporal contextual information into the Markov model results in a reduction of classification errors. It is observed that applying the hybrid multi-contextual Markov model on irregular graphs enhances classification results in comparison to modeling on the regular structures of quadtrees, which are the hierarchical image representation usually used in MRF-based image analysis. X-band SAR systems are generally not suited for detecting flooding under dense vegetation canopies such as forests, due to the low capability of the X-band signal to penetrate such media. Within this thesis a method is proposed for the automatic derivation of flooded areas beneath shrubs and grasses from TerraSAR-X data. Furthermore, an approach is developed which combines high resolution topographic information with multi-scale image segmentation to enhance the mapping accuracy in areas consisting of flooded vegetation and anthropogenic objects, as well as to remove non-water look-alike areas.
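
    To illustrate the tile-based thresholding idea, the sketch below splits an image into tiles, scores each tile with Otsu's between-class variance, and averages the thresholds of the most clearly bimodal tiles into a global water/land threshold. The tile size, the percentile-based bimodality test, and the synthetic test image are assumptions for illustration, not the thesis's exact parameterization of local class-conditional densities.

```python
# Hedged sketch of tile-based threshold initialization: the amplitude image
# is split into tiles, Otsu's between-class variance is computed per tile,
# and the thresholds of the most clearly bimodal tiles (those most likely
# to contain both water and land) are averaged into a global threshold.
import numpy as np

def otsu(values, bins=256):
    """Return (threshold, between-class variance at that threshold)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # class-0 (dark/water) weight per cut
    mu = np.cumsum(p * centers)    # cumulative class-0 mass
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    k = np.nanargmax(sigma_b)
    return centers[k], sigma_b[k]

def global_flood_threshold(image, tile=64):
    thresholds, variances = [], []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            t, v = otsu(image[r:r + tile, c:c + tile].ravel())
            thresholds.append(t)
            variances.append(v)
    thresholds, variances = np.array(thresholds), np.array(variances)
    bimodal = variances >= np.percentile(variances, 75)  # keep top quarter
    return thresholds[bimodal].mean()

# Synthetic test: a dark "water" patch inside brighter "land" speckle.
rng = np.random.default_rng(0)
img = rng.gamma(4.0, 25.0, size=(256, 256))                # land speckle
img[64:160, 64:160] = rng.gamma(4.0, 5.0, size=(96, 96))   # darker water
t = global_flood_threshold(img)
water = img < t
print(f"threshold {t:.1f}; water-patch pixels flagged: "
      f"{water[64:160, 64:160].mean():.0%}")
```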

    Large-Scale Spatial Data Management on Modern Parallel and Distributed Platforms

    The rapidly growing volume of spatial data has made it desirable to develop efficient techniques for managing large-scale spatial data. Traditional spatial data management techniques cannot meet the efficiency and scalability requirements of large-scale spatial data processing. In this dissertation, we have developed new data-parallel designs for large-scale spatial data management that can better utilize modern inexpensive commodity parallel and distributed platforms, including multi-core CPUs, many-core GPUs, and computer clusters, to achieve both efficiency and scalability. After introducing background on spatial data management and modern parallel and distributed systems, we present our parallel designs for spatial indexing and spatial join query processing on both multi-core CPUs and GPUs for high efficiency, as well as their integration with Big Data systems for better scalability. Experimental results using real-world datasets demonstrate the effectiveness and efficiency of the proposed techniques for managing large-scale spatial data.
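
    A minimal sketch of the data-parallel pattern, assuming a uniform grid partitioning and a brute-force refine step: points and rectangles are assigned to grid cells, and the per-cell joins run on separate cores. The dissertation's designs use tuned spatial indexes on multi-core CPUs and GPUs; this only shows the partition-then-join idea.

```python
# Data-parallel sketch of a partition-based spatial join (points in
# rectangles): space is cut into grid cells, each rectangle is assigned to
# every cell it overlaps, and the per-cell joins run on separate cores.
# Grid size and the brute-force refine step are illustrative choices.
from collections import defaultdict
from multiprocessing import Pool

CELL = 10.0  # grid cell size (illustrative)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def join_partition(args):
    points, rects = args
    # Refine step: brute force within one partition.
    return [(p, r) for p in points for r in rects
            if r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]]

def parallel_spatial_join(points, rects, workers=4):
    parts_p, parts_r = defaultdict(list), defaultdict(list)
    for p in points:                    # a point lives in exactly one cell
        parts_p[cell_of(*p)].append(p)
    for r in rects:                     # a rectangle goes to every cell it overlaps
        x0, y0, x1, y1 = r
        for cx in range(int(x0 // CELL), int(x1 // CELL) + 1):
            for cy in range(int(y0 // CELL), int(y1 // CELL) + 1):
                parts_r[(cx, cy)].append(r)
    tasks = [(parts_p[c], parts_r[c]) for c in parts_p if c in parts_r]
    with Pool(workers) as pool:
        results = pool.map(join_partition, tasks)
    return [pair for part in results for pair in part]

if __name__ == "__main__":
    pts = [(1.0, 1.0), (15.0, 3.0), (55.0, 55.0)]
    boxes = [(0.0, 0.0, 20.0, 5.0), (50.0, 50.0, 60.0, 60.0)]
    print(parallel_spatial_join(pts, boxes))
```

    Because each point belongs to exactly one cell, no result pair is produced twice; duplicating rectangles across the cells they overlap is the usual price of this scheme.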

    A query processing system for very large spatial databases using a new map algebra

    In this thesis we introduce a query processing approach for spatial databases and explain the main concepts we defined and developed: a spatial algebra and a graph-based approach used in the optimizer. The spatial algebra was defined to express queries and transformation rules during the different steps of query optimization. To cover a vast variety of potential applications, we tried to define the algebra as completely as possible. The algebra views spatial data as maps of spatial objects. The algebraic operators act on maps and produce new maps. Aggregate functions act on maps and objects and produce objects or basic values (characters, numbers, etc.). The optimizer receives a query as an algebraic expression and produces an efficient QEP (Query Evaluation Plan) through two main consecutive stages: QEG (Query Evaluation Graph) generation and QEP generation. In QEG generation we construct a graph equivalent to the algebraic expression and then apply graph transformation rules to produce an efficient QEG. In QEP generation we take the efficient QEG, perform predicate ordering and approximation, and then generate the efficient QEP. The QEP is a set of consecutive phases that must be executed in the specified order; each phase consists of one or more primitive operations, and all primitive operations within the same phase can be executed in parallel. We implemented the optimizer, a random spatial query generator, and a simulated spatial database. The query generator produces random queries for the purpose of testing the optimizer. The simulated spatial database is a set of functions that simulate primitive spatial operations, returning the cost of the corresponding primitive operation according to the input parameters. We submitted the randomly generated queries to the optimizer and passed the generated QEPs to the spatial database simulator, using the experimental results to discuss the optimizer's characteristics and performance. The optimizer was designed for databases with a very large number of spatial objects; nevertheless, most of the concepts we used can be applied to all spatial information systems.
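
    As a toy illustration of the two optimizer stages, the sketch below applies one invented graph-transformation rule (pushing a selection below an overlay so it filters a map early) and then orders predicates by estimated selectivity, standing in for QEG and QEP generation respectively. The operator names, the rule, and the selectivity numbers are made up for illustration; the thesis's algebra, rule set, and cost model are richer.

```python
class Node:
    """Operator node of a toy algebraic expression / evaluation graph."""
    def __init__(self, op, children=(), pred=None):
        self.op, self.children, self.pred = op, list(children), pred
    def __repr__(self):
        p = f"[{self.pred}]" if self.pred else ""
        inner = ", ".join(map(repr, self.children))
        return f"{self.op}{p}({inner})" if self.children else f"{self.op}{p}"

def push_selection(node):
    """Rule: select[p](overlay(A, B)) -> overlay(select[p](A), B), assuming
    (for this sketch) that the predicate refers only to the left map."""
    if node.op == "select" and node.children and node.children[0].op == "overlay":
        left, right = node.children[0].children
        return Node("overlay", [Node("select", [left], node.pred), right])
    return Node(node.op, [push_selection(c) for c in node.children], node.pred)

def order_predicates(preds, selectivity):
    # QEP step: evaluate the most selective predicate first so later,
    # more expensive predicates see fewer candidate objects.
    return sorted(preds, key=lambda p: selectivity[p])

qeg = Node("select", [Node("overlay", [Node("roads"), Node("rivers")])],
           pred="width > 10")
print(push_selection(qeg))   # overlay(select[width > 10](roads), rivers)
print(order_predicates(["intersects", "area > 5"],
                       {"intersects": 0.4, "area > 5": 0.05}))
```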