483 research outputs found

    Voronoi image segmentation and its applications to geoinformatics

    Get PDF
    As a wide variety of geospatial images becomes available for analysis, there is a strong need for intelligent geospatial image processing methods. Segmenting and districting digital images is a core process of great importance in many geo-related applications. We propose a flexible image segmentation framework based on generalized Voronoi diagrams computed through Euclidean distance transforms. We introduce a three-scan algorithm that segments images in O(N) time, where N is the number of pixels. The algorithm is capable of handling generators of complex types (point, line and area), Minkowski metrics, and different weights. The paper also presents applications of the proposed method to various geoinformation datasets. Illustrated examples demonstrate the usefulness and robustness of the proposed method.
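
    A minimal sketch of the distance-transform route to Voronoi-style segmentation described above, assuming SciPy is available. It is not the paper's three-scan algorithm and covers only the Euclidean metric with point generators; the function name and the toy grid are illustrative.

        # Assign every pixel to its nearest generator via a Euclidean
        # distance transform (a sketch, not the paper's three-scan method).
        import numpy as np
        from scipy import ndimage

        def voronoi_segment(generators: np.ndarray) -> np.ndarray:
            """generators: 2-D int array, 0 = empty pixel, >0 = generator id."""
            # distance_transform_edt measures distance to the nearest zero,
            # so pass a mask that is True away from the generators.
            _, (iy, ix) = ndimage.distance_transform_edt(
                generators == 0, return_indices=True)
            # Each pixel inherits the label of its closest generator pixel.
            return generators[iy, ix]

        # Toy example: two point generators on a 5x5 grid.
        img = np.zeros((5, 5), dtype=int)
        img[0, 0] = 1
        img[4, 4] = 2
        print(voronoi_segment(img))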

    Inferring Unusual Crowd Events From Mobile Phone Call Detail Records

    Full text link
    The pervasiveness and availability of mobile phone data offer the opportunity to discover usable knowledge about crowd behaviors in urban environments. Cities can leverage such knowledge to provide better services (e.g., public transport planning, optimized resource allocation) and to improve safety. Call Detail Record (CDR) data is a practical source for detecting and monitoring unusual events, given the high level of mobile phone penetration compared with GPS-equipped and open devices. In this paper, we provide a methodology that is able to detect unusual events from CDR data, which typically has low accuracy in terms of spatial and temporal resolution. Moreover, we introduce a concept of unusual event that involves a large number of people who exhibit unusual mobility behavior. Our careful consideration of the issues that arise from coarse-grained CDR data ultimately leads to a completely general framework that can detect unusual crowd events from CDR data effectively and efficiently. Through extensive experiments on real-world CDR data for a large city in Africa, we demonstrate that our method can detect unusual events with 16% higher recall and over 10 times higher precision compared to state-of-the-art methods. We implement a visual analytics prototype system to help end users analyze detected unusual crowd events to best suit different application scenarios. To the best of our knowledge, this is the first work on the detection of unusual events from CDR data that considers its temporal and spatial sparseness and distinguishes between users' unusual activities and daily routines. (Comment: 18 pages, 6 figures)
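
    A hedged illustration of the underlying notion of an "unusual event" as a strong deviation from routine call volume; this is not the paper's framework. The record layout, the robust deviation score, and the threshold k are assumptions of the sketch.

        # Flag cell/weekday/hour slots whose call counts deviate strongly
        # from that slot's historical baseline (median and MAD).
        from collections import defaultdict
        import statistics

        def detect_unusual(records, k=4.0):
            """records: iterable of (cell_id, weekday, hour, call_count)."""
            history = defaultdict(list)
            for cell, weekday, hour, count in records:
                history[(cell, weekday, hour)].append(count)

            events = []
            for (cell, weekday, hour), counts in history.items():
                if len(counts) < 4:              # too little history
                    continue
                med = statistics.median(counts)
                mad = statistics.median(abs(c - med) for c in counts) or 1.0
                for count in counts:
                    if (count - med) / mad > k:  # far above routine level
                        events.append((cell, weekday, hour, count))
            return events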

    Machining-based coverage path planning for automated structural inspection

    Get PDF
    The automation of robotically delivered nondestructive evaluation inspection shares many aims with traditional machining in manufacturing. This paper presents a new hardware and software system for automated thickness mapping of large-scale areas containing multiple obstacles, employing computer-aided design (CAD)/computer-aided manufacturing (CAM)-inspired path planning to control a novel mobile robotic thickness-mapping inspection vehicle. A custom postprocessor provides the necessary translation from CAM numerical control code through robotic kinematic control to combine and automate the overall process. The generalized steps to implement this approach for any mobile robotic platform are presented herein and applied, in this instance, to a novel thickness-mapping crawler. The inspection capabilities of the system were evaluated in an indoor mock-inspection scenario, within a motion tracking cell, to provide quantitative performance figures for positional accuracy. Multiple thickness defects simulating corrosion features on a steel sample plate were combined with obstacles to be avoided during the inspection. A minimum thickness mapping error of 0.21 mm and a mean path error of 4.41 mm were observed for a 2 m² carbon steel sample of 10 mm nominal thickness. The benefits of this automated approach lie in repeatability of area coverage, obstacle avoidance, and reduced path overlap, all of which directly lead to increased task efficiency and reduced inspection time for large structural assets.
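
    A minimal sketch of the CAM-style raster (boustrophedon) coverage idea that underlies such path planning, with obstacle cells skipped. It does not reproduce the paper's CAD/CAM postprocessor or robot kinematics; the function and parameter names are illustrative.

        # Serpentine coverage waypoints over a gridded area, skipping
        # obstacle cells (a sketch of the raster toolpath idea).
        def raster_coverage(width, height, step, obstacles=frozenset()):
            """Yield (x, y) waypoints at 'step' spacing in serpentine row order."""
            xs = [i * step for i in range(int(width // step) + 1)]
            ys = [j * step for j in range(int(height // step) + 1)]
            for row, y in enumerate(ys):
                row_xs = xs if row % 2 == 0 else list(reversed(xs))
                for x in row_xs:
                    if (x, y) not in obstacles:
                        yield (x, y)

        # Example: a 2 m x 1 m plate at 0.25 m spacing with one cell to avoid.
        path = list(raster_coverage(2.0, 1.0, 0.25, obstacles={(1.0, 0.5)}))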

    Behavioural strategy for indoor mobile robot navigation in dynamic environments

    Get PDF
    PhD Thesis. The development of behavioural strategies for indoor mobile navigation has become a challenging and practical issue in cluttered indoor environments, such as hospitals or factories, where there are many static and moving objects, including humans and other robots, all trying to complete their own specific tasks; some objects may be moving in a similar direction to the robot, whereas others may be moving in the opposite direction. The key requirement for any mobile robot is to avoid colliding with any object that may prevent it from reaching its goal or, as a consequence, bring harm to any individual within its workspace. This challenge is further complicated by unobserved objects suddenly appearing in the robot's path, particularly when the robot crosses a corridor or an open doorway. Therefore the mobile robot must be able to anticipate such scenarios and manoeuvre quickly to avoid collisions. In this project, a hybrid control architecture has been designed for navigation within dynamic environments. The control system includes three levels, namely deliberative, intermediate and reactive, which work together to achieve short, fast and safe navigation. The deliberative level creates a short and safe path from the current position of the mobile robot to its goal using the wavefront algorithm, estimates the current location of the mobile robot, and extracts the region from which unobserved objects may appear. The intermediate level links the deliberative level and the reactive level, which includes several behaviours for implementing the global path in such a way as to avoid any collision. In avoiding dynamic obstacles, the controller has to identify and extract obstacles from the sensor data, estimate their speeds, and then regulate its own speed and direction to minimize the collision risk and maximize progress towards the goal. The velocity obstacle (VO) approach is considered an easy and simple method for avoiding dynamic obstacles, whilst the collision cone principle is used to detect a collision situation between two circular-shaped objects. However, the VO approach faces two challenges when applied in indoor environments. The first challenge is the extraction of collision cones of non-circular objects from sensor data, where applying circle-fitting methods generally produces large and inaccurate collision cones, especially for line-shaped obstacles such as walls. The second challenge is that the mobile robot sometimes cannot move towards its goal because all of its velocities towards the goal lie within collision cones. In this project, a method has been demonstrated to extract the collision cones of circular and non-circular objects using a laser sensor, where the obstacle size and the collision time are used to weight the robot's velocities. In addition, the principle of the virtual obstacle was proposed to minimize the collision risk with unobserved moving obstacles. Simulations and experiments using the proposed control system on a Pioneer mobile robot showed that the robot can successfully avoid static and dynamic obstacles. Furthermore, the mobile robot was able to reach its target within an indoor environment without causing any collision or missing the target.
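
    A hedged sketch of the collision-cone test behind the velocity obstacle (VO) idea for two circular objects: a candidate robot velocity is unsafe if the relative velocity ray enters the obstacle disc inflated by the sum of the radii. The 2-D geometry and names are illustrative and do not reproduce the thesis' weighting of velocities or the virtual-obstacle extension.

        # Collision-cone check for two circular objects in the plane.
        import math

        def in_collision_cone(p_robot, v_robot, r_robot, p_obst, v_obst, r_obst):
            """True if the relative velocity points into the collision cone."""
            dx, dy = p_obst[0] - p_robot[0], p_obst[1] - p_robot[1]
            vx, vy = v_robot[0] - v_obst[0], v_robot[1] - v_obst[1]
            radius = r_robot + r_obst
            if math.hypot(dx, dy) <= radius:        # already overlapping
                return True
            speed = math.hypot(vx, vy)
            if speed == 0 or (dx * vx + dy * vy) <= 0:
                return False                        # stationary or moving away
            # Perpendicular distance from the obstacle centre to the relative
            # velocity ray; inside the cone if it is below the inflated radius.
            return abs(dx * vy - dy * vx) / speed < radius

        # Example: robot heading straight at an obstacle 3 m ahead -> True.
        print(in_collision_cone((0, 0), (1, 0), 0.3, (3, 0), (0, 0), 0.3))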

    Adaptive Methods for Point Cloud and Mesh Processing

    Get PDF
    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, along with 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. In this dissertation, several variations of order statistic filters originally proposed for image processing are extended to point cloud filtering, and a new adaptive vector median filter is proposed for removing noise and outliers from noisy point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds with improved results such as preservation of significant features; these methods are applied to standard models as well as synthetic models and real scenes; 2) a hardware acceleration of the proposed point cloud filtering method is implemented on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LIDAR data filtering is proposed, with the objective of enabling automatic extraction of ground points from aerial LIDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order statistics-based filters are widely used in signal and image processing because they can easily remove outlier noise and preserve important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean, adaptive median, and adaptive vector median filter on point cloud data. The experiments show that large-scale noise is removed while preserving important features of the point cloud within reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)), as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud), are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median filter is further optimized for denoising and ground filtering of aerial LIDAR point clouds, and is also accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the Laplace-Beltrami operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for geometric processing of 3D meshes; this work applies several of them to sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
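
    A minimal sketch of a plain (non-adaptive) vector median filter on a point cloud, assuming NumPy and SciPy: each point is replaced by the member of its k-neighborhood with the smallest summed distance to the other members. The dissertation's adaptive variant and its multicore acceleration are not reproduced here.

        # Vector median filtering of an (N, 3) point cloud via k-NN.
        import numpy as np
        from scipy.spatial import cKDTree

        def vector_median_filter(points: np.ndarray, k: int = 8) -> np.ndarray:
            """points: (N, 3) array; returns a filtered copy."""
            tree = cKDTree(points)
            _, nbr_idx = tree.query(points, k=k)      # (N, k) neighbor indices
            out = np.empty_like(points)
            for i, idx in enumerate(nbr_idx):
                nbrs = points[idx]                    # (k, 3) neighborhood
                # Vector median = neighbor minimizing total distance to others.
                d = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=-1)
                out[i] = nbrs[np.argmin(d.sum(axis=1))]
            return out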

    Mobile Robots

    Get PDF
    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when sophisticated methods for brain signal processing are used. The generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training mobile robot operators is also a very difficult task because of several factors related to different task executions. The presented improvement concerns environment model generation based on autonomous mobile robot observations.

    Searching and mining in enriched geo-spatial data

    Get PDF
    The emergence of new data collection mechanisms in geo-spatial applications, paired with a heightened tendency of users to volunteer information, provides an ever-increasing flow of data of high volume and complex nature, often associated with inherent uncertainty. Such mechanisms include crowdsourcing, automated knowledge inference, tracking, and social media data repositories. Data bearing additional information from multiple sources, such as probability distributions, text or numerical attributes, social context, or multimedia content, can be called multi-enriched. Searching and mining this abundance of information holds many challenges if all of the data's potential is to be realised. This thesis addresses several major issues arising in that field, namely path queries using multi-enriched data, trend mining in social media data, and handling uncertainty in geo-spatial data. In all cases, the developed methods have made significant contributions and have appeared in or been accepted to various renowned international peer-reviewed venues. A common use of geo-spatial data is path queries in road networks, where traditional methods optimise results based on absolute and often singular metrics, i.e., finding the shortest paths based on distance or the best trade-off between distance and travel time. Integrating additional aspects such as qualitative or social data, by enriching the data model with knowledge derived from sources such as those mentioned above, allows for queries that fit a broader scope of needs or preferences. This thesis presents two implementations of incorporating multi-enriched data into road networks. In the first, a range of qualitative data sources is evaluated to gain knowledge about user preferences, which is subsequently matched with locations represented in a road network and integrated into its components. Several methods are presented for highly customisable path queries that incorporate a wide spectrum of data. In the second, a framework is described for resource distribution with reappearance in road networks to serve one or more clients, resulting in paths that provide maximum gain based on a probabilistic evaluation of available resources; applications include finding parking spots. Social media trends are an emerging research area giving insight into user sentiment and important topics. Such trends consist of bursts of messages concerning a certain topic within a time frame, significantly deviating from the average appearance frequency of the same topic. By investigating the dissemination of such trends in space and time, this thesis presents methods to classify trend archetypes and predict the future dissemination of a trend. Processing and querying uncertain data is particularly demanding, given the additional knowledge required to yield results with probabilistic guarantees. Since such knowledge is not always available and queries do not easily scale to larger datasets due to the #P-complete nature of the problem, many existing approaches reduce the data to a deterministic representation of its underlying model to eliminate uncertainty. However, data uncertainty can also provide valuable insight into the nature of the data that cannot be represented in a deterministic manner.
This thesis presents techniques for clustering uncertain data as well as for query processing that take the additional information from uncertainty models into account while preserving scalability through a sampling-based approach, whereas previous approaches could provide only one of the two. The given solutions enable the application of various existing clustering techniques or query types within a framework that manages the uncertainty.
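
    A hedged sketch of a personalised path query over an enriched road network as discussed above: each edge carries several attributes (e.g., length and a crowding score derived from enriched data) and the route cost is a user-weighted combination. The attribute names, weights, and graph layout are assumptions of this sketch, not the thesis' query operators.

        # Dijkstra over an adjacency dict whose edges carry multiple
        # attributes; the cost is a weighted sum chosen by the user.
        import heapq

        def best_path(graph, start, goal, weights):
            """graph: {node: [(neighbour, {attr: value}), ...]};
            weights: {attr: importance}, all non-negative."""
            def edge_cost(attrs):
                return sum(weights.get(a, 0.0) * v for a, v in attrs.items())

            queue = [(0.0, start, [start])]
            seen = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nbr, attrs in graph.get(node, []):
                    if nbr not in seen:
                        heapq.heappush(queue, (cost + edge_cost(attrs), nbr, path + [nbr]))
            return float("inf"), []

        # Example: a short but crowded route versus a longer, quieter one.
        g = {"A": [("B", {"km": 2.0, "crowding": 0.9}), ("C", {"km": 3.0, "crowding": 0.1})],
             "B": [("D", {"km": 2.0, "crowding": 0.8})],
             "C": [("D", {"km": 1.5, "crowding": 0.2})]}
        print(best_path(g, "A", "D", {"km": 1.0, "crowding": 3.0}))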

    A Geospatial Based Decision Framework for Extending MARSSIM Regulatory Principles into the Subsurface

    Get PDF
    The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) is a regulatory guidance document for compliance evaluation of radiologically contaminated soils and buildings (USNRC, 2000). Compliance is determined by comparing radiological measurements to established limits using a combination of hypothesis testing and scanning measurements. Scanning allows investigators to identify localized pockets of contamination missed during sampling and to assess radiological exposure at different spatial scales. Scale is important in radiological dose assessment, as regulatory limits can vary with the size of the contaminated area and sites are often evaluated at more than one scale (USNRC, 2000). Unfortunately, scanning is not possible in the subsurface, and direct application of MARSSIM breaks down. This dissertation develops a subsurface decision framework called the Geospatial Extension to MARSSIM (GEM) to provide multi-scale subsurface decision support in the absence of scanning technologies. Based on geostatistical simulations of radiological activity, the GEM recasts the decision rule as a multi-scale, geospatial decision rule called the regulatory limit rule (RLR). The RLR requires simultaneous compliance with all scales and depths of interest at every location throughout the site, and is accompanied by a compliance test called the stochastic conceptual site model (SCSM). For sites that fail compliance, a remedial design strategy called the Multi-scale Remedial Design Model (MrDM) is developed that spatially indicates volumes requiring remedial action. The MrDM is accompanied by a sample design strategy known as the Multi-scale Remedial Sample Design Model (MrsDM) that refines this remedial action volume through careful placement of new sample locations. Finally, a new sample design called “check and cover” is presented that can support early sampling efforts by directly using prior knowledge about where contamination may exist. This dissertation demonstrates how these tools are used within an environmental investigation and situates the GEM within existing regulatory methods, with an emphasis on the Environmental Protection Agency’s Triad method, which recognizes and encourages the use of advanced decision methods. The GEM is implemented within the Spatial Analysis and Decision Assistance (SADA) software and applied to a hypothetical radiologically contaminated site.
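
    A hedged illustration of the multi-scale compliance idea behind the RLR: given Monte Carlo realizations of activity on a grid, a cell is flagged when the probability that its neighborhood mean exceeds the scale-specific limit is too high, at any of the scales checked. The window sizes, limits, and 5% tolerance are assumptions of this sketch, not values from the GEM or SADA.

        # Flag grid cells that fail a multi-scale probabilistic limit check.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def noncompliant(realizations, scale_limits, p_max=0.05):
            """realizations: (n_sims, ny, nx) simulated activity fields.
            scale_limits: {window_size_in_cells: activity_limit}."""
            flags = np.zeros(realizations.shape[1:], dtype=bool)
            for size, limit in scale_limits.items():
                # Neighborhood mean at this scale, per realization.
                means = uniform_filter(realizations, size=(1, size, size))
                exceed_prob = (means > limit).mean(axis=0)   # fraction of sims
                flags |= exceed_prob > p_max
            return flags

        # Example: check 1-cell and 5-cell scales against different limits.
        sims = np.random.default_rng(0).gamma(2.0, 1.0, size=(200, 50, 50))
        print(noncompliant(sims, {1: 8.0, 5: 3.0}).sum(), "cells flagged")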