    Automatic building detection from aerial images for mobile robot mapping

    To improve mobile robot outdoor mapping, information about the shape and location of buildings is of interest. This paper describes a system for automatic detection of buildings in aerial images taken from a nadir view. The system builds two independent types of hypotheses from the image contents. A segmentation process, implemented as an ensemble of SOMs (Self-Organizing Maps), is trained and used to create a segmented image showing different types of roofs, vegetation, and sea. The second type of hypothesis is based on an edge image produced from the aerial photo. A line extraction process takes the edge image as input and extracts lines from it; from these lines, corners and rectangles that represent buildings are constructed. A classification process combines the information from both hypotheses to determine whether each rectangle is a building, an unsure building, or an unknown object.
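
    As a rough illustration of the edge-based hypothesis stage, the sketch below extracts line segments from an aerial image, the raw material for the corner and rectangle construction described above. The file name and all thresholds are illustrative assumptions, and the probabilistic Hough transform stands in for the paper's own line extraction process, which is not specified here.

    import cv2

    # Hypothetical input: a grayscale nadir aerial image.
    img = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)  # edge image (thresholds illustrative)
    # One plausible line extractor; the paper's algorithm may differ.
    lines = cv2.HoughLinesP(edges, rho=1, theta=3.14159 / 180,
                            threshold=80, minLineLength=20, maxLineGap=5)
    # Each entry is (x1, y1, x2, y2); roughly perpendicular segments
    # meeting at an endpoint would form corner hypotheses, and four
    # matching corners a rectangle (building) hypothesis.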

    VECTOR MAP GENERATION FROM AERIAL IMAGERY USING DEEP LEARNING

    We propose a simple yet efficient technique that leverages a semantic segmentation model to extract and separate individual buildings in densely built-up areas using medium-resolution satellite/UAV orthoimages. We adopted the standard U-Net architecture, adding a batch normalization layer after every convolution, to label every pixel in the image. The result is fed into the proposed post-processing pipeline, which separates connected binary blobs of buildings and converts them into a GIS layer for further analysis and for generating 3D buildings. The proposed algorithm extracts building footprints from aerial images, transforms the semantic map into an instance map, and converts it into GIS layers to generate 3D buildings. We integrated this method into Indshine's cloud platform to speed up digitization, generate 3D models automatically, and perform geospatial analysis. Our network achieved a Dice coefficient of ∼70% for the segmentation process.
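
    The distinctive architectural detail here is the batch-normalization layer after every convolution. A minimal sketch of such a U-Net convolution block, assuming PyTorch and illustrative channel sizes, might look as follows; the full encoder-decoder and the blob-separation pipeline are not reproduced.

    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),   # BN after every convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )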

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic, human-interpretable objects (roads, forest and water boundaries) in onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features through a series of robust data association steps allows a localisation solution to be achieved with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets; its extension into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix', they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution for the defense mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
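
    The core association idea, matching features detected in onboard imagery against a georeferenced reference database to correct the inertial estimate, can be sketched as follows. The single nearest-neighbour step and mean-offset correction are simplifying assumptions of this sketch; the thesis describes a series of robust data association steps instead.

    import numpy as np

    def position_fix(detected_xy, reference_xy, inertial_xy):
        """Associate each detected feature with its nearest reference
        feature and apply the mean residual as a position correction."""
        residuals = []
        for p in detected_xy:  # features already projected via the inertial estimate
            d = np.linalg.norm(reference_xy - p, axis=1)
            residuals.append(reference_xy[np.argmin(d)] - p)
        return inertial_xy + np.mean(residuals, axis=0)  # crude, non-robust fix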

    Automated Visual Database Creation For A Ground Vehicle Simulator

    This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color-histogram-based segmentation, active contours (snakes), and morphological processing to extract road boundary coordinates for conversion into Matlab- or MultiGen OpenFlight-compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a Maximum A Posteriori (MAP) criterion, and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges, which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
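
    The MAP segmentation step can be sketched as follows: per-class colour histograms act as PDFs estimated from the truth frame, and each pixel is assigned to the class that maximizes prior times likelihood. The bin count and the flat prior are illustrative assumptions, not values from the thesis.

    import numpy as np

    BINS = 16  # histogram bins per colour channel (illustrative)

    def color_pdf(pixels):
        """Normalized 3-D colour histogram from (N, 3) RGB samples."""
        h, _ = np.histogramdd(pixels, bins=(BINS,) * 3, range=((0, 256),) * 3)
        return h / h.sum()

    def map_segment(frame, road_pdf, terrain_pdf, p_road=0.5):
        """Label each pixel road/terrain by the MAP criterion."""
        idx = (frame // (256 // BINS)).reshape(-1, 3)
        road = road_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * p_road
        terr = terrain_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * (1 - p_road)
        return (road > terr).reshape(frame.shape[:2])  # True where road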

    Road Information Extraction from Mobile LiDAR Point Clouds using Deep Neural Networks

    Urban roads, as one of the essential transportation infrastructures, provide considerable motivation for rapid urban sprawl and bring notable economic and social benefits. Accurate and efficient extraction of road information plays a significant role in the development of autonomous vehicles (AVs) and high-definition (HD) maps. Mobile laser scanning (MLS) systems have been widely used for many transportation-related studies and applications in road inventory, including road object detection, pavement inspection, road marking segmentation and classification, and road boundary extraction, benefiting from their large-scale data coverage, high surveying flexibility, high measurement accuracy, and reduced weather sensitivity. Road information from MLS point clouds is significant for road infrastructure planning and maintenance, and has an important impact on transportation-related policymaking, driving behaviour regulation, and traffic efficiency enhancement. Compared to existing threshold-based and rule-based road information extraction methods, deep learning methods have demonstrated superior performance in 3D road object segmentation and classification tasks. However, three main challenges remain that impede deep learning methods from precisely and robustly extracting road information from MLS point clouds. (1) Point clouds obtained from MLS systems are large in volume and irregular in format, which presents significant challenges for managing and processing such massive unstructured points. (2) Variations in point density and intensity are inevitable because of the profiling scanning mechanism of MLS systems. (3) Due to occlusions and the limited scanning range of onboard sensors, some road objects are incomplete, which considerably degrades the performance of threshold-based road information extraction methods. To deal with these challenges, this doctoral thesis proposes several deep neural networks that encode inherent point cloud features and extract road information. These novel deep learning models have been tested on several datasets and deliver robust and accurate road information extraction results compared to state-of-the-art deep learning methods in complex urban environments. First, an end-to-end feature extraction framework for 3D point cloud segmentation is proposed using dynamic point-wise convolutional operations at multiple scales; this framework is less sensitive to data distribution and computational power. Second, a capsule-based deep learning framework to extract and classify road markings is developed to update road information and support HD maps; it demonstrates the practical application of combining capsule networks with hierarchical feature encodings of georeferenced feature images. Third, a novel deep learning framework for road boundary completion is developed using MLS point clouds and satellite imagery, based on a U-shaped network and a conditional deep convolutional generative adversarial network (c-DCGAN). Empirical evidence obtained from experiments, compared with state-of-the-art methods, demonstrates the superior performance of the proposed models in road object semantic segmentation, road marking extraction and classification, and road boundary completion tasks.
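
    The common denominator of these frameworks is per-point feature encoding on unstructured point clouds. A generic shared-MLP sketch of that building block, assuming PyTorch and illustrative layer widths, is shown below; the thesis's dynamic multi-scale point-wise convolutions, capsule network, and c-DCGAN are not reproduced here.

    import torch
    import torch.nn as nn

    class PointwiseEncoder(nn.Module):
        """Shared MLP applied independently to every 3-D point."""
        def __init__(self, in_dim=3, feat_dim=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, 32), nn.ReLU(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )

        def forward(self, pts):      # pts: (N, 3) xyz coordinates
            return self.mlp(pts)     # (N, feat_dim) per-point features

    feats = PointwiseEncoder()(torch.rand(1024, 3))  # toy cloud of 1024 points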

    ROAD NETWORK IDENTIFICATION AND EXTRACTION IN SATELLITE IMAGERY USING OTSU'S METHOD AND CONNECTED COMPONENT ANALYSIS

    As high-resolution satellite images have become easily available, researchers have been motivated to search for advanced methods for object detection and extraction from satellite images. Roads are important curvilinear objects, as they are used in urban planning, emergency response, route planning, etc. Automatic road detection from satellite images has become an important topic in photogrammetry with the advances in remote sensing technology. In this paper, a method for road detection and extraction from satellite images is introduced. This method uses histogram equalization, Otsu's method of image segmentation, connected component analysis, and morphological operations. The aim of this paper is to explore the potential of high-resolution satellite images for detecting and extracting the road network in a robust manner.
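
    A compact sketch of the described pipeline, assuming OpenCV, a hypothetical input file, and illustrative kernel and area-filter values, could read:

    import cv2
    import numpy as np

    gray = cv2.imread("satellite.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    eq = cv2.equalizeHist(gray)                               # histogram equalization
    _, binary = cv2.threshold(eq, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu's method
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # morphological cleanup
    # Connected component analysis: keep only large components,
    # which are more likely to belong to the road network.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(clean)
    road_mask = np.zeros_like(clean)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > 500:   # area filter (illustrative)
            road_mask[labels == i] = 255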

    URA*: Uncertainty-aware Path Planning using Image-based Aerial-to-Ground Traversability Estimation for Off-road Environments

    A major challenge in off-road autonomous navigation is the lack of maps or road markings that can be used to plan a path for autonomous robots. Classical path planning methods mostly assume a perfectly known environment, without accounting for the inherent perception and sensing uncertainty in detecting terrain and obstacles in off-road environments. Recent work in computer vision and deep neural networks has advanced the capability of terrain traversability segmentation from raw images; however, the feasibility of using these noisy segmentation maps for navigation and path planning has not been adequately explored. To address this problem, this research proposes URA*, an uncertainty-aware path planning method that uses aerial images for autonomous navigation in off-road environments. An ensemble convolutional neural network (CNN) model is first used to perform pixel-level traversability estimation from aerial images of the region of interest. The traversability predictions are represented as a grid of traversal probability values. An uncertainty-aware planner is then applied to compute the best path from a start point to a goal point given these noisy traversal probability estimates. The proposed planner also incorporates replanning techniques to allow rapid replanning during online robot operation. The proposed method is evaluated on the Massachusetts Road Dataset, the DeepGlobe dataset, and a dataset of aerial images from off-road proving grounds at Mississippi State University. Results show that the proposed image segmentation and planning methods outperform conventional planning algorithms in terms of the quality and feasibility of the initial path, as well as the quality of replanned paths.
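
    One simple way to make a grid planner uncertainty-aware, sketched below under assumptions not taken from the paper, is to run A* with a per-step cost of 1 - log(p), so cells with low traversal probability become expensive while the Manhattan heuristic stays admissible (every step costs at least 1). URA*'s actual cost model and replanning machinery are more involved.

    import heapq
    import itertools
    import numpy as np

    def plan(prob, start, goal, eps=1e-6):
        """A* over a 2-D grid of traversal probabilities."""
        tick = itertools.count()  # heap tie-breaker
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        frontier = [(h(start), next(tick), 0.0, start)]
        parent, gbest = {start: None}, {start: 0.0}
        while frontier:
            _, _, g, cur = heapq.heappop(frontier)
            if cur == goal:  # reconstruct the path back to the start
                path = [cur]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dx, cur[1] + dy)
                if 0 <= nxt[0] < prob.shape[0] and 0 <= nxt[1] < prob.shape[1]:
                    ng = g + 1.0 - np.log(max(prob[nxt], eps))  # uncertainty penalty
                    if ng < gbest.get(nxt, float("inf")):
                        gbest[nxt] = ng
                        parent[nxt] = cur
                        heapq.heappush(frontier, (ng + h(nxt), next(tick), ng, nxt))
        return None  # goal unreachable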