
    Interferometric Synthetic Aperture RADAR and Radargrammetry towards the Categorization of Building Changes

    The purpose of this work is to investigate SAR techniques relying on multi-image acquisition for fully automatic and rapid change detection analysis at the building level. In particular, the benefits and limitations of the complementary use of two specific SAR techniques, InSAR and radargrammetry, in an emergency context are examined in terms of speed, coverage, and accuracy. The analysis is performed using spaceborne SAR data
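    As a rough illustration of the interferometric side of such an analysis, the sketch below estimates window-based InSAR coherence between two co-registered single-look complex images and thresholds the coherence drop between a pre- and post-event pair. The window size, threshold, and function names are illustrative assumptions, not the paper's actual processing chain.

```python
import numpy as np

def _boxcar(x, win):
    # Moving-average filter applied along both axes (pure NumPy, zero-padded edges).
    k = np.ones(win) / win
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, x)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out

def insar_coherence(s1, s2, win=5):
    """Estimate interferometric coherence between two co-registered
    single-look complex SAR images over a win x win sliding window.
    Coherence near 1 indicates stable scatterers; a drop between the
    pre- and post-event pairs can flag changes at the building level.
    """
    num = _boxcar(s1 * np.conj(s2), win)
    den = np.sqrt(_boxcar(np.abs(s1) ** 2, win) * _boxcar(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

def change_mask(coh_pre, coh_post, drop=0.3):
    # Flag pixels whose coherence dropped by more than `drop` after the event.
    return (coh_pre - coh_post) > drop
```

    Since identical acquisitions are perfectly correlated, coherence evaluates to 1 everywhere in that case, which makes the estimator easy to sanity-check.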

    Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks

    This paper addresses the highly challenging problem of automatically detecting man-made structures, especially buildings, in very high resolution (VHR) synthetic aperture radar (SAR) images. In this context, the paper has two major contributions. Firstly, it presents a novel and generic workflow that initially classifies spaceborne TomoSAR point clouds (generated by processing VHR SAR image stacks using advanced interferometric techniques known as SAR tomography, TomoSAR) into buildings and non-buildings with the aid of auxiliary information (i.e., either using openly available 2-D building footprints or adopting an optical image classification scheme), and later back-projects the extracted building points onto the SAR imaging coordinates to produce automatic large-scale benchmark labelled (buildings/non-buildings) SAR datasets. Secondly, these labelled datasets (i.e., building masks) have been utilized to construct and train state-of-the-art deep Fully Convolutional Neural Networks with an additional Conditional Random Field represented as a Recurrent Neural Network to detect building regions in a single VHR SAR image. Such a cascaded formation has been successfully employed in the computer vision and remote sensing fields for optical image classification but, to our knowledge, has not been applied to SAR images. The results of the building detection are illustrated and validated over a TerraSAR-X VHR spotlight SAR image covering approximately 39 km² (almost the whole city of Berlin) with mean pixel accuracies of around 93.84%. Comment: Accepted for publication in IEEE TGRS
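    The back-projection step described above ends with classified building points rasterized into a pixel mask. A minimal sketch of that final rasterization, assuming the points are already expressed as (row, col) SAR image coordinates; the projection itself is not shown, and all names here are hypothetical:

```python
import numpy as np

def points_to_mask(points_ra, shape):
    """Rasterize back-projected TomoSAR building points into a binary
    building mask in SAR image geometry.

    points_ra : (N, 2) float array of (row, col) image coordinates of
                points already classified as 'building' and projected
                into the SAR imaging grid (hypothetical interface).
    shape     : (rows, cols) of the target SAR image.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    rc = np.rint(points_ra).astype(int)
    inside = (
        (rc[:, 0] >= 0) & (rc[:, 0] < shape[0])
        & (rc[:, 1] >= 0) & (rc[:, 1] < shape[1])
    )
    mask[rc[inside, 0], rc[inside, 1]] = 1
    return mask
```

    Masks produced this way can serve as weak labels for training a segmentation network, which is the role the building masks play in the paper.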

    Building change detection in Multitemporal very high resolution SAR images


    Remote Sensing Object Detection Meets Deep Learning: A Meta-review of Challenges and Advances

    Remote sensing object detection (RSOD), one of the most fundamental and challenging tasks in the remote sensing field, has received longstanding attention. In recent years, deep learning techniques have demonstrated robust feature representation capabilities and led to a big leap in the development of RSOD techniques. In this era of rapid technical evolution, this review aims to present a comprehensive account of recent achievements in deep learning based RSOD methods. More than 300 papers are covered in this review. We identify five main challenges in RSOD, including multi-scale object detection, rotated object detection, weak object detection, tiny object detection, and object detection with limited supervision, and systematically review the corresponding methods developed in a hierarchical division manner. We also review the widely used benchmark datasets and evaluation metrics within the field of RSOD, as well as the application scenarios for RSOD. Future research directions are provided for further promoting research in RSOD. Comment: Accepted by IEEE Geoscience and Remote Sensing Magazine. More than 300 papers relevant to the RSOD field were reviewed in this survey

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combined application of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports

    Integrating spatial and spectral information for automatic feature identification in high-resolution remotely sensed images

    This research used image objects, instead of pixels, as the basic unit of analysis in high-resolution imagery. Thus, not only spectral radiance and texture were used in the analysis, but also spatial context. Furthermore, the automated identification of attributed objects is potentially useful for integrating remote sensing with a vector-based GIS.

    A study area in Morgantown, WV was chosen as a site for the development and testing of automated feature extraction methods with high-resolution data. In the first stage of the analysis, edges were identified using texture. Experiments with simulated data indicated that a linear operator identified curved and sharp edges more accurately than square-shaped operators. Areas with edges that formed a closed boundary were used to delineate sub-patches. In the region growing step, the similarities of all adjacent sub-patches were examined using a multivariate Hotelling T² test that draws on the classes' covariance matrices. Sub-patches that were not sufficiently dissimilar were merged to form image patches.

    Patches were then classified into seven classes: Building, Road, Forest, Lawn, Shadowed Vegetation, Water, and Shadow. Six classification methods were compared: the pixel-based ISODATA and maximum likelihood approaches, field-based ECHO, and region-based maximum likelihood using patch means, a divergence index, and patch probability density functions (pdfs). Classification with the divergence index showed the lowest accuracy, a kappa index of 0.254. The highest accuracy, 0.783, was obtained from classification using the patch pdf. This classification also produced a visually pleasing product, with well-delineated objects and without the distracting salt-and-pepper effect of isolated misclassified pixels. The accuracies of classification with patch mean, pixel-based maximum likelihood, ISODATA, and ECHO were 0.735, 0.687, 0.610, and 0.605, respectively.

    Spatial context was used to generate aggregate land cover information. An Urbanized Rate Index, defined based on the percentage of Building and Road area within a local window, was used to segment the image. Five summary land cover classes were identified from the Urbanized Rate segmentation and the image object classification: High Urbanized Rate and large building sizes, Intermediate Urbanized Rate and intermediate building sizes, Low Urbanized Rate and small building sizes, Forest, and Water
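    The region-growing merge criterion can be sketched as a two-sample Hotelling T² test between the feature vectors of two adjacent sub-patches. This version uses a pooled sample covariance rather than the class covariance matrices the dissertation describes, so treat it as an illustrative simplification with an assumed threshold:

```python
import numpy as np

def hotelling_t2(a, b):
    """Two-sample Hotelling T² statistic between the feature samples of
    two adjacent sub-patches.

    a, b : (n, d) arrays of per-pixel feature vectors (e.g., spectral
    bands plus texture). A pooled sample covariance is used here for
    brevity; the dissertation's variant draws on class covariance
    matrices instead.
    """
    na, nb = len(a), len(b)
    mean_diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = ((na - 1) * np.cov(a, rowvar=False)
              + (nb - 1) * np.cov(b, rowvar=False)) / (na + nb - 2)
    return (na * nb) / (na + nb) * mean_diff @ np.linalg.solve(pooled, mean_diff)

def should_merge(a, b, threshold=50.0):
    # Merge sub-patches that are not sufficiently dissimilar (assumed cutoff).
    return hotelling_t2(a, b) < threshold
```

    Patches drawn from the same distribution yield a small T² and are merged, while a shifted distribution inflates the statistic and blocks the merge.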

    Smart environment monitoring through micro unmanned aerial vehicles

    In recent years, the improvements of small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have been promoting the development of a wide range of practical applications. In aerial video surveillance, the monitoring of broad areas still poses many challenges due to the need to accomplish several tasks in real time, including mosaicking, change detection, and object detection. In this thesis work, a small-scale UAV-based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode makes it possible to monitor an area of interest by performing several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area of interest and classifies all the known elements (e.g., persons) found on the ground by an improved Faster R-CNN architecture previously trained. In subsequent reconnaissance flights, the system searches for any changes (e.g., disappearance of persons) that may occur in the mosaic by a histogram equalization and RGB-Local Binary Pattern (RGB-LBP) based algorithm. If changes are present, the mosaic is updated. The second mode performs real-time classification using, again, our improved Faster R-CNN model, which is useful for time-critical operations. Thanks to different design features, the system works in real time and performs mosaicking and change detection tasks at low altitude, thus allowing the classification even of small objects. The proposed system was tested using the whole set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and other public datasets. The evaluation of the system by well-known performance metrics has shown remarkable results in terms of mosaic creation and updating, as well as change detection and object detection
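    A minimal sketch of the LBP-histogram comparison underlying such a change detector is shown below. It works on a single channel (the thesis applies the idea per RGB channel after histogram equalization), and the distance threshold is an assumed value, not the thesis's parameter:

```python
import numpy as np

def lbp8(gray):
    """Basic 8-neighbour Local Binary Pattern codes for a 2-D array.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre value.
    """
    g = gray.astype(float)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def changed(tile_old, tile_new, thresh=0.25):
    """Compare LBP histograms of two co-registered tiles; a large
    histogram distance flags a change (assumed threshold)."""
    h1, _ = np.histogram(lbp8(tile_old), bins=256, range=(0, 256), density=True)
    h2, _ = np.histogram(lbp8(tile_new), bins=256, range=(0, 256), density=True)
    return 0.5 * np.abs(h1 - h2).sum() > thresh  # total variation distance
```

    Because the comparison is histogram-based, it tolerates small registration errors between flights better than a pixel-wise difference would.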

    Statistical Fusion of Multi-aspect Synthetic Aperture Radar Data for Automatic Road Extraction

    In this dissertation, a new statistical fusion approach for automatic road extraction from SAR images taken from different looking angles (i.e., multi-aspect SAR data) is presented. The main input to the fusion is extracted line features. The fusion is carried out at the decision level and is based on Bayesian network theory
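    A hedged sketch of decision-level Bayesian fusion for one candidate line, using a naive-Bayes independence assumption in place of the dissertation's full Bayesian network (the function name, likelihood-ratio interface, and prior are all illustrative):

```python
def fuse_line_evidence(likelihood_ratios, prior=0.1):
    """Decision-level fusion of line evidence from several SAR aspects.

    likelihood_ratios : per-aspect values of
        P(extracted line evidence | road) / P(evidence | not road),
    treated as conditionally independent given the class. This is a
    naive-Bayes stand-in for the full Bayesian network of the thesis.
    Returns the posterior probability that the candidate line is a road.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

    Fusing in odds form keeps the update a simple product: uninformative aspects (ratio 1) leave the prior untouched, while several moderately supportive aspects compound into a confident road decision.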