4 research outputs found

    Moving Vehicle Information Extraction from Single-Pass WorldView-2 Imagery Based on ERGAS-SNS Analysis

    Because WorldView-2 (WV2) has a small time lag when acquiring images from its panchromatic (PAN) and two multispectral (MS1 and MS2) sensors, a moving vehicle appears at different positions in the three image bands. This displacement can be exploited to identify moving vehicles and to estimate vehicle information such as speed and direction. In this paper, we focus on moving vehicle detection based on this displacement information and present a novel processing chain. Vehicle locations are extracted by an improved morphological detector based on the vehicle's shape properties. To make better use of the time lag between MS1 and MS2, a band selection process is performed through both visual inspection and quantitative analysis, and three spectral-neighbor band pairs that contribute most to vehicle identification are selected. In addition, we improve the spatial and spectral analysis method by incorporating a local ERGAS index analysis (ERGAS-SNS) to identify moving vehicles. Experimental results on WV2 images show that the correctness, completeness, and quality rates of the proposed method were about 94%, 91%, and 86%, respectively. Thus, the proposed method performs well for moving vehicle detection and information extraction.
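    The local ERGAS-SNS analysis mentioned in the abstract builds on the standard ERGAS image-quality index, ERGAS = 100 · (h/l) · sqrt((1/N) Σ (RMSE_k / μ_k)²), where h/l is the ratio of the higher to the lower spatial resolution and μ_k is the mean of band k. A minimal sketch of the global form of this index follows; the function name and the default ratio (PAN/MS pixel-size ratio for WV2) are illustrative assumptions, not taken from the paper, which uses a localized variant:

    ```python
    import numpy as np

    def ergas(reference, estimate, ratio=0.25):
        """Global ERGAS index between two multiband images.

        reference, estimate: arrays of shape (bands, H, W)
        ratio: spatial resolution ratio h/l (assumed 0.25 here,
               i.e. 0.5 m PAN over 2 m MS for WorldView-2)
        Lower values indicate smaller spectral distortion.
        """
        reference = np.asarray(reference, dtype=float)
        estimate = np.asarray(estimate, dtype=float)
        band_terms = []
        for ref_b, est_b in zip(reference, estimate):
            # Per-band root-mean-square error, normalized by the band mean
            rmse = np.sqrt(np.mean((ref_b - est_b) ** 2))
            band_terms.append((rmse / ref_b.mean()) ** 2)
        return 100.0 * ratio * np.sqrt(np.mean(band_terms))
    ```

    Identical inputs yield an ERGAS of zero; larger per-band discrepancies, such as those introduced by a vehicle that has moved between band acquisitions, raise the index.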

    Priorities to advance monitoring of ecosystem services using Earth observation

    Managing ecosystem services in the context of global sustainability policies requires reliable monitoring mechanisms. While satellite Earth observation offers great promise to support this need, significant challenges remain in quantifying connections between ecosystem functions, ecosystem services, and human well-being benefits. Here, we provide a framework showing how Earth observation, together with socioeconomic information and model-based analysis, can support assessments of ecosystem service supply, demand, and benefit, and illustrate this for three services. We argue that the full potential of Earth observation is not yet realized in ecosystem service studies. To provide guidance for priority setting and to spur research in this area, we propose five priorities to advance the capabilities of Earth observation-based monitoring of ecosystem services.

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can help characterize land use (urban analysis, precision agriculture), detect damage (e.g., from natural disasters such as floods, hurricanes, earthquakes, and oil spills at sea), and give insights into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal data sets made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What improvements or new opportunities did the fusion offer? What objectives were addressed, and what solutions were reported? And, from this, what will be the next challenges?