
    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods are focused on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed. The proposed system consists of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective, in comparison with traditional accuracy assessment metrics. Building height is critical for building 3D structure extraction. As data sources for height estimation, digital surface models (DSMs) that are derived from stereo images using existing software typically provide low accuracy results in terms of rooftop elevations. Therefore, a new image matching method is proposed by adding building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated by the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. 
Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values on bare ground. Experiments with this novel approach estimate building height with a 1.5 m residual, outperforming conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not capture the differences between 2D and 3D evaluation well, and wall accuracy has traditionally been ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. As such, the resultant multi-criteria system provides an improved evaluation method for building reconstruction.
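The multi-criteria 2D evaluation described above (matched rate, shape similarity, positional accuracy) could be sketched roughly as follows. This is an illustrative approximation, not the thesis's actual metrics: intersection-over-union stands in for the matched rate, a bounding-box aspect-ratio comparison stands in for a proper shape descriptor, and centroid distance stands in for positional accuracy; the function name and signature are invented.

```python
import numpy as np

def evaluate_footprint(pred, truth):
    """Compare a predicted building mask against a reference mask using
    three criteria: matched rate (here IoU), shape similarity, and
    positional accuracy (centroid offset in pixels)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    # Matched rate: intersection-over-union of the two masks.
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    matched_rate = inter / union if union else 0.0

    # Shape similarity: compare bounding-box aspect ratios
    # (a crude stand-in for a real shape descriptor).
    def aspect(mask):
        ys, xs = np.nonzero(mask)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        return min(h, w) / max(h, w)
    shape_sim = 1.0 - abs(aspect(pred) - aspect(truth))

    # Positional accuracy: Euclidean distance between mask centroids.
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    pos_err = float(np.linalg.norm(centroid(pred) - centroid(truth)))

    return matched_rate, shape_sim, pos_err
```

A full implementation would match buildings individually and use a richer shape measure (e.g. turning functions or moment invariants) rather than aspect ratios.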

    Ship Detection and Segmentation using Image Correlation

    There has been intensive research interest in ship detection and segmentation over the last two decades, driven by high demand from a wide range of civil applications. However, existing approaches, which are mainly based on statistical properties of images, fail to detect smaller ships and boats. Specifically, known techniques are not robust enough to the inevitable small geometric and photometric changes in images containing ships. In this paper a novel approach to ship detection is proposed based on the correlation of maritime images. The idea comes from the observation that the fine pattern of the sea surface changes considerably from time to time, whereas the appearance of a ship remains essentially unchanged. We want to examine whether the images have a common unaltered part, a ship in this case. To this end, we developed a method, Focused Correlation (FC), to achieve robustness to geometric distortions of the image content. Various experiments have been conducted to evaluate the effectiveness of the proposed approach.
    Comment: 8 pages, to be published in proc. of conference IEEE SMC 201
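The core observation, that the sea surface decorrelates between frames while a ship stays coherent, can be illustrated with a plain sliding-window correlation. This is not the paper's Focused Correlation method, only a minimal sketch; the function name and window size are illustrative.

```python
import numpy as np

def local_correlation(img1, img2, win=5):
    """Per-pixel normalised correlation between two co-registered frames,
    computed over a sliding window. The sea surface decorrelates between
    frames while a ship stays coherent, so high values flag ship pixels."""
    h, w = img1.shape
    r = win // 2
    out = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            a = img1[y - r:y + r + 1, x - r:x + r + 1].ravel()
            b = img2[y - r:y + r + 1, x - r:x + r + 1].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            out[y, x] = (a * b).sum() / denom if denom > 0 else 0.0
    return out
```

On two frames where only a ship region is identical, the correlation map peaks inside the ship and stays near zero over the decorrelated sea.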

    Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using the modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT removes thermal and scatter noise with the MLAF and detects extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost.
The proposed method showed good SAR and IR target detection performance through feature-selection-based decision fusion on a synthetic database generated by OKTAL-SE.
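The Adaboost-based feature selection behind the fusion step can be sketched in miniature, assuming SAR and IR detector responses are simply concatenated into one feature vector per candidate. Each boosting round picks the single most discriminative feature/threshold stump, so the ensemble doubles as a feature selector. This is generic AdaBoost with decision stumps, not the paper's exact scheme; all names are illustrative.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost with axis-aligned decision stumps.
    X: (n, d) feature matrix (e.g. concatenated SAR and IR scores),
    y: labels in {-1, +1}. Each round selects one feature and threshold,
    which acts as a feature-selection step for decision-level fusion."""
    n, d = X.shape
    w = np.ones(n) / n                      # sample weights
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(d):                  # candidate feature
            for t in np.unique(X[:, j]):    # candidate threshold
                for s in (1, -1):           # stump polarity
                    pred = np.where(X[:, j] >= t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        pred = np.where(X[:, j] >= t, s, -s)
        w = w * np.exp(-alpha * y * pred)       # reweight samples
        w = w / w.sum()
        learners.append((alpha, j, t, s))
    return learners

def adaboost_predict(learners, X):
    """Weighted vote of the selected stumps."""
    score = np.zeros(X.shape[0])
    for alpha, j, t, s in learners:
        score += alpha * np.where(X[:, j] >= t, s, -s)
    return np.sign(score)
```

In a fusion setting, inspecting which feature indices `j` the stumps select shows which SAR or IR features actually drive the decision.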

    High resolution radargrammetry with COSMO-SkyMed, TerraSAR-X and RADARSAT-2 imagery: development and implementation of an image orientation model for Digital Surface Model generation

    Digital Surface and Terrain Models (DSM/DTM) are highly relevant in several territorial applications, such as topographic mapping, engineering monitoring, geology, security, land planning and management of Earth's resources. Satellite remote sensing data offer the opportunity for continuous observation of the Earth's surface, with short acquisition and revisit times. High-resolution SAR (Synthetic Aperture Radar) satellite imagery meets these requirements, offering night-and-day and all-weather capability (penetration of clouds, haze and rain). Two different methods may be used to generate DSMs from SAR data: the interferometric and the radargrammetric approaches. Radargrammetry uses only the intensity information of the SAR images and reconstructs 3D information from a pair of images, similarly to photogrammetry. The radargrammetric DSM extraction procedure consists of two basic steps: stereo pair orientation and image matching for the automatic detection of homologous points. The goal of this work is the definition and implementation of a geometric model to orient SAR imagery in zero-Doppler geometry. The radargrammetric model implemented in SISAR (Software per Immagini Satellitari ad Alta Risoluzione, developed at the Geodesy and Geomatics Division of the University of Rome "La Sapienza") is based on the equations of radar target acquisition and zero-Doppler focusing. Moreover, a tool for generating SAR Rational Polynomial Coefficients (RPCs) has been implemented in the SISAR software, similar to the one already developed for optical sensors. The possibility of generating SAR RPCs from a radargrammetric model is of particular interest since, at present, most SAR imagery is not supplied with RPCs, although the RPF (Rational Polynomial Functions) model is available in several commercial software packages; only RADARSAT-2 data are supplied with vendor RPCs.
To test the effectiveness of the implemented RPC generation tool and of the SISAR radargrammetric orientation model, reference results were computed: the stereo pairs were oriented with both models. The tests were carried out on several test sites using COSMO-SkyMed, TerraSAR-X and RADARSAT-2 data. Moreover, to evaluate the advantages of, and the differences in accuracy between, orientation models computed with and without GCPs, a Monte Carlo test was performed. Finally, to assess the real effectiveness of the radargrammetric technique for DSM extraction and to compare the radargrammetric tool implemented in the commercial software PCI Geomatica v. 2012 with the SISAR software, the images acquired over the Beauport test site were used for DSM extraction. It is important to underline that several tests were carried out; part of these tests were conducted under the supervision of Prof. Thierry Toutin at CCRS (Canada Centre for Remote Sensing), where the PCI Geomatica orientation model was developed, in order to identify the best parameter settings for extracting radargrammetric DSMs. In conclusion, the results obtained are representative of the geometric potential of SAR stereo pairs for 3D surface reconstruction.
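The zero-Doppler condition at the heart of radargrammetric orientation, (P − S(t)) · V(t) = 0 with slant range |P − S(t)|, can be illustrated with a toy solver. This is a bisection sketch under an idealised straight-line orbit, not the SISAR implementation; all names and the orbit model are assumptions for illustration.

```python
import numpy as np

def zero_doppler_time(point, sat_pos, sat_vel, t0, t1, iters=60):
    """Find the azimuth time at which the zero-Doppler condition
    (P - S(t)) . V(t) = 0 holds, by bisection over [t0, t1].
    sat_pos(t) / sat_vel(t) are callables giving the satellite state;
    returns the zero-Doppler time and the slant range at that time."""
    def doppler(t):
        return np.dot(point - sat_pos(t), sat_vel(t))
    a, b = t0, t1
    for _ in range(iters):
        m = 0.5 * (a + b)
        if doppler(a) * doppler(m) <= 0:    # root lies in [a, m]
            b = m
        else:
            a = m
    t = 0.5 * (a + b)
    slant = np.linalg.norm(point - sat_pos(t))
    return t, slant
```

For a real sensor the orbit would come from interpolated state vectors, and the (time, range) pairs over a grid of ground points are exactly what an RPC-generation tool fits its rational polynomials to.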

    New techniques for the automatic registration of microwave and optical remotely sensed images

    Remote sensing is a remarkable tool for monitoring and mapping the land and ocean surfaces of the Earth. Recently, with the launch of many new Earth observation satellites, there has been an increase in the amount of data being acquired, and the potential for mapping is greater than ever before. Furthermore, sensors which are currently operational are acquiring data in many different parts of the electromagnetic spectrum. It has long been known that by combining images that have been acquired at different wavelengths, or at different times, the ability to detect and recognise features on the ground is greatly increased. This thesis investigates the possibilities for automatically combining radar and optical remotely sensed images. The process of combining images, known as data integration, is a two-step procedure: geometric integration (image registration) and radiometric integration (data fusion). Data fusion is essentially an automatic procedure, but the problems associated with the automatic registration of multisource images have not, in general, been resolved. This thesis proposes a method of automatic image registration based on the extraction and matching of common features which are visible in both images. The first stage of the registration procedure uses patches as the matching primitives in order to determine the approximate alignment of the images. The second stage refines the registration results by matching edge features. Throughout the development of the proposed registration algorithm, reliability, robustness and automation were always considered priorities. Tests with both small images (512x512 pixels) and full-scene images showed that the algorithm could successfully register images to an acceptable level of accuracy.
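The coarse alignment stage could be approximated by phase correlation, which recovers an integer translation between two images from the peak of their normalised cross-power spectrum. This is a simplified stand-in for the thesis's patch-based matching, and the edge-based refinement stage is omitted entirely.

```python
import numpy as np

def coarse_offset(ref, tgt):
    """Estimate the integer translation between two same-sized images by
    phase correlation. Returns (dy, dx) such that np.roll(tgt, (dy, dx),
    axis=(0, 1)) aligns tgt with ref (circular shifts assumed)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    # Normalise to unit magnitude: the inverse transform is then a
    # sharp peak at the translation offset.
    corr = np.abs(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:     # wrap into the signed range
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For multisource radar/optical pairs the raw intensities correlate poorly, which is why the thesis matches extracted features (patches, then edges) rather than pixel values; phase correlation only illustrates the coarse-then-fine idea.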

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic, human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute error bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position ’fix’; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation form a robust solution for the defence mapping application. A system of the proposed design would provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
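The association step, matching detected features to the a-priori reference database and deriving a position 'fix', can be sketched as a gated nearest-neighbour search. This is a drastic simplification of the robust data-association pipeline described above; the function name, gating radius, and mean-offset fix are all illustrative assumptions.

```python
import numpy as np

def associate_and_fix(detected, reference, gate=50.0):
    """Associate detected feature positions (in an inertially drifted
    frame) with geo-referenced database positions by nearest neighbour
    within a gating radius, then return the mean offset as a position
    'fix'. detected: (n, 2) array; reference: (m, 2) array."""
    offsets = []
    for d in detected:
        dists = np.linalg.norm(reference - d, axis=1)
        j = np.argmin(dists)
        if dists[j] < gate:                 # gated association
            offsets.append(reference[j] - d)
    if not offsets:
        return None                         # no confident matches
    return np.mean(offsets, axis=0)         # drift correction estimate
```

A real system would reject outlier associations robustly (e.g. with RANSAC-style consensus) and fuse the fix into the inertial filter rather than applying a raw mean; the same offset, applied in reverse, is what geo-corrects objects mapped from the imagery.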