2,448 research outputs found

    High-resolution optical and SAR image fusion for building database updating

    Get PDF
    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. To perform these two steps, relevant features are extracted from the images in the neighborhood of the considered object. The object removal/inclusion decision is based on a score obtained by fusing these features in the framework of Dempster–Shafer evidence theory.
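The removal/inclusion score described above can be sketched with Dempster's rule of combination. The two "features" and all mass values below are illustrative placeholders, not the paper's actual features:

```python
from itertools import product

def combine_masses(m1, m2):
    """Combine two Dempster-Shafer mass functions with Dempster's rule.

    Masses are dicts mapping frozensets of hypotheses to belief mass.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are contradictory")
    # Normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical image features voting on a building's presence:
P, A = frozenset({"present"}), frozenset({"absent"})
U = P | A  # ignorance: mass on the whole frame of discernment
shadow_feature = {P: 0.6, A: 0.1, U: 0.3}
contour_feature = {P: 0.5, A: 0.2, U: 0.3}
score = combine_masses(shadow_feature, contour_feature)
```

The combined mass on "present" can then be thresholded to decide whether the object stays in the DB; allowing mass on the full frame is what lets a weak feature abstain rather than vote.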

    Enhanced free space detection in multiple lanes based on single CNN with scene identification

    Full text link
    Many systems for autonomous vehicles' navigation rely on lane detection. Traditional algorithms usually estimate only the position of the lanes on the road, but an autonomous control system may also need to know whether a lane marking can be crossed and what portion of space inside the lane is free from obstacles, in order to make safer control decisions. On the other hand, free-space detection algorithms only detect navigable areas, without information about lanes. State-of-the-art algorithms use CNNs for both tasks, with significant consumption of computing resources. We propose a novel approach that estimates the free space inside each lane with a single CNN. Additionally, at only a small additional cost in GPU RAM, we infer the road type, which is useful for path planning. To achieve this result, we train a multi-task CNN. We then further process the output of the network to extract polygons that can be used effectively in navigation control. Finally, we provide a computationally efficient implementation, based on ROS, that can be executed in real time. Our code and trained models are available online. (Comment: Will appear in the 2019 IEEE Intelligent Vehicles Symposium, IV 2019.)
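The polygon-extraction step is not specified in the abstract; a minimal sketch of one plausible approach, tracing the left and right boundaries of a per-lane free-space mask, might look like this (the mask below is synthetic):

```python
import numpy as np

def mask_to_polygon(mask):
    """Turn a binary free-space mask (H x W) into a simple polygon.

    For each image row containing free space, take the leftmost and
    rightmost free pixels; walk down the left edge and back up the
    right edge so the vertex list closes into a boundary.
    """
    left, right = [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            left.append((int(xs[0]), y))
            right.append((int(xs[-1]), y))
    return left + right[::-1]

# Toy 5x8 mask: a free-space region that widens toward the bottom row
mask = np.zeros((5, 8), dtype=bool)
for y in range(5):
    mask[y, 3 - y // 2 : 5 + y // 2] = True
poly = mask_to_polygon(mask)
```

A compact vertex list like this is cheap to publish on a ROS topic, which is one reason polygons are preferable to raw per-pixel masks for a navigation controller.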

    Urban scene description for a multi-scale classification of high-resolution imagery: the case of the Cape Town urban scene

    Get PDF
    In this paper, a multi-level contextual classification approach for the City of Cape Town, South Africa, is presented. The methodology developed to identify the different objects using the multi-level contextual technique comprised three important phases.

    Monoplotting through Fusion of LIDAR Data and Low-Cost Digital Aerial Imagery

    Get PDF

    Automatic road network extraction in suburban areas from aerial images

    Get PDF
    [no abstract]

    Data-Driven Shape Analysis and Processing

    Full text link
    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, reviewing the literature and relating existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. (Comment: 10 pages, 19 figures.)

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    Get PDF
    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated in a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
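As a rough illustration of the EM component, the sketch below fits a plain two-class 1-D Gaussian mixture by EM on synthetic intensities; the dissertation's method additionally corrects explicitly for partial-volume voxels, which this sketch omits:

```python
import numpy as np

def em_segment(intensities, n_iter=50):
    """Two-class 1-D Gaussian mixture fitted by expectation-maximization."""
    x = np.asarray(intensities, dtype=float)
    # Initialise the two class means from the data range
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return resp.argmax(axis=1), mu

# Synthetic voxel intensities from two tissue classes
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(40, 5, 500), rng.normal(90, 5, 500)])
labels, means = em_segment(voxels)
```

A partial-volume correction, as in the dissertation, would add extra mixture components (or a relabelling step) for voxels whose intensity is a blend of the two tissue classes.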

    The application of remote sensing to identify and measure sealed soil and vegetated surfaces in urban environments

    Get PDF
    Soil is an important non-renewable resource. Its protection and allocation are critical to sustainable development goals. Urban development is an important driver of soil loss due to sealing over by buildings, pavements and transport infrastructure. Monitoring sealed soil surfaces in urban environments is gaining increasing interest, not only for scientific research but also for local planning and national authorities. The aim of this research was to investigate the extent to which automated classification methods can detect soil sealing in UK urban environments by remote sensing. The objectives included the development of object-based classification methods, using two types of earth observation data, and evaluation by comparison with manual aerial photo interpretation techniques. Four sample areas within the city of Cambridge were used for the development of an object-based classification model. The acquired data were true-colour aerial photography (0.125 m resolution) and QuickBird satellite imagery (2.8 m multi-spectral resolution). The classification scheme included the following land cover classes: sealed surfaces, vegetated surfaces, trees, bare soil and rail tracks. Shadowed areas were also identified as an initial class, and attempts were made to reclassify them into the actual land cover type. The accuracy of the thematic maps was determined by comparison with polygons derived from manual air-photo interpretation; the average overall accuracy was 84%. The creation of simple binary maps of sealed vs. vegetated surfaces resulted in a statistically significant accuracy increase to 92%. The integration of ancillary data (OS MasterMap) into the object-based model did not improve its performance (overall accuracy of 91%). The use of satellite data in the object-based model gave an overall accuracy of 80%, a 7% decrease compared to the aerial photography. Future investigation will explore whether the integration of elevation data aids discrimination of features such as trees from other vegetation types. The use of colour-infrared aerial photography should also be tested. Finally, applying the object-based classification model to a different study area would test its transferability.
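The reported accuracy gain from collapsing the thematic map to a binary sealed-vs-vegetated product can be illustrated on a confusion matrix. The matrix entries and the class grouping below are invented for illustration and do not reproduce the study's figures:

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference, cols: classified)
# classes: sealed, vegetated, trees, bare soil
cm = np.array([
    [50,  3,  1,  2],
    [ 4, 45,  5,  1],
    [ 2,  6, 40,  0],
    [ 3,  1,  0, 37],
])

def overall_accuracy(cm):
    """Fraction of reference samples that received the correct label."""
    return cm.trace() / cm.sum()

def merge_classes(cm, groups):
    """Collapse a confusion matrix to coarser classes.

    groups maps each original class index to a merged class index,
    mirroring the binary sealed-vs-vegetated maps in the study.
    """
    k = max(groups.values()) + 1
    merged = np.zeros((k, k), dtype=cm.dtype)
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            merged[groups[i], groups[j]] += cm[i, j]
    return merged

# Hypothetical grouping: sealed + bare soil vs. vegetated + trees
groups = {0: 0, 1: 1, 2: 1, 3: 0}
acc = overall_accuracy(cm)
binary_acc = overall_accuracy(merge_classes(cm, groups))
```

Merging classes moves confusions between members of the same group onto the diagonal, which is why the binary product scores higher than the full thematic map.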