
    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made with both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR) and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and giving insight into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate change (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What improvements or new opportunities did the fusion offer? What objectives were addressed, and what solutions were reported? And, from this, what will be the next challenges?
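
    As a concrete and deliberately simple illustration of the feature-level fusion discussed above, the sketch below stacks co-registered optical, SAR, and LiDAR-derived height rasters into one per-pixel feature matrix and trains a generic classifier on it. The array names, shapes, and the random-forest choice are illustrative assumptions, not a method drawn from the contests.

```python
# Minimal sketch of feature-level fusion for co-registered multimodal rasters.
# Assumes `optical` (H, W, bands), `sar` (H, W) and `lidar_height` (H, W)
# numpy arrays on the same grid, plus sparse per-pixel labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_features(optical, sar, lidar_height):
    """Concatenate per-pixel features from each modality into one matrix."""
    h, w, _ = optical.shape
    return np.concatenate(
        [optical.reshape(h * w, -1),
         sar.reshape(h * w, 1),
         lidar_height.reshape(h * w, 1)],
        axis=1,
    )

def train_fusion_classifier(feats, labels, labeled_mask):
    """Fit a classifier on labeled pixels; labels/mask are flattened (H*W,)."""
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    clf.fit(feats[labeled_mask], labels[labeled_mask])
    return clf
```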

    Deep Learning Approaches for Seagrass Detection in Multispectral Imagery

    Seagrass forms the basis of critically important marine ecosystems and plays an important role in balancing marine ecological systems, so monitoring its distribution in different parts of the world is of great interest. Remote sensing imagery is an effective data modality with which seagrass monitoring and quantification can be performed remotely. Traditionally, researchers used multispectral satellite images to map seagrass manually. Automatic machine learning techniques, especially deep learning algorithms, have recently achieved state-of-the-art performance in many computer vision applications. This dissertation presents a set of deep learning models for seagrass detection in multispectral satellite images. It also introduces novel domain adaptation approaches to adapt the models to new locations and to temporal image series. In Chapter 3, I compare a deep capsule network (DCN) with a deep convolutional neural network (DCNN) for seagrass detection in high-resolution multispectral satellite images. These methods are tested on three satellite images of Florida coastal areas and obtain comparable performances. In addition, I propose a few-shot deep learning strategy to transfer knowledge learned by the DCN from one location to others for seagrass detection. In Chapter 4, I develop a semi-supervised domain adaptation method to generalize a trained DCNN model to multiple locations for seagrass detection. First, the model uses a generative adversarial network (GAN) to align the marginal distribution of data in the source domain to that in the target domain using unlabeled data from both domains. Second, it uses a few labeled samples from the target domain to align class-specific data distributions between the two domains. The model achieves the best results in 28 out of 36 scenarios compared to other state-of-the-art domain adaptation methods. In Chapter 5, I develop a semantic segmentation method for seagrass detection in multispectral time-series images. First, I train a state-of-the-art image segmentation method using an active learning approach with the DCNN classifier in the loop. Then, I develop an unsupervised domain adaptation (UDA) algorithm to detect seagrass across temporal images, and extend this unsupervised domain adaptation work to seagrass detection across locations. In Chapter 6, I present an automated bathymetry estimation model based on multispectral satellite images. Bathymetry refers to the depth of the ocean floor and plays a predominant role in identifying marine species in seawater. Accurate bathymetry information for coastal areas will facilitate seagrass detection by reducing false positives, because seagrass usually does not grow beyond a certain depth. However, bathymetry information for most parts of the world is obsolete or missing, and traditional bathymetry measurement systems require extensive labor. I use an ensemble machine learning approach to estimate bathymetry from a few in-situ sonar measurements and evaluate the proposed model at three coastal locations in Florida.
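
    The following is a minimal sketch of a patch-based multispectral CNN classifier of the kind discussed above; it is not the dissertation's DCNN or DCN architecture, and the band count, patch size, and layer widths are assumptions chosen only for illustration.

```python
# Minimal patch-based CNN for multispectral seagrass / not-seagrass patches.
# Architecture details are illustrative assumptions, not the dissertation's model.
import torch
import torch.nn as nn

class SeagrassPatchCNN(nn.Module):
    def __init__(self, n_bands=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a 1x1 feature map
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, n_bands, H, W)
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Example: score a batch of 5x5 patches from an 8-band image.
model = SeagrassPatchCNN()
logits = model(torch.randn(16, 8, 5, 5))       # -> shape (16, 2)
```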

    Mapping alpine treeline with high resolution imagery and LiDAR data in North Cascades National Park, Washington

    We evaluated several approaches for the automated detection and mapping of trees and treeline in an alpine environment. Using multiple remote sensing platforms and software programs, we evaluated both pixel-based and object-based classification approaches in combination with high-resolution multispectral imagery and LiDAR-derived tree height data. The study area in North Cascades National Park included over 10,000 hectares of some of the most rugged terrain in the conterminous U.S. Through the use of the Normalized Difference Vegetation Index (NDVI), differences in illumination conditions created by steep slopes and tall trees were minimized. Data fusion of the multispectral imagery, NDVI, and LiDAR-derived tree height data produced the highest accuracies for both the pixel-based (88.4%) and the object-based classifications (92.9%). These results demonstrate that either method will produce an acceptable level of accuracy, and that the availability of a near-infrared band for calculating NDVI is extremely important. The NDVI used in conjunction with the multispectral imagery helped to minimize issues with shadows caused by rugged terrain. Furthermore, LiDAR-derived tree heights were used to augment classification routines to achieve even greater accuracy; where shadows were too dark to produce meaningful NDVI values, the LiDAR-derived tree height data were instrumental in helping to distinguish trees from other land cover types. Both the pixel-based and the object-based approaches hold considerable promise for automated mapping and monitoring of the treeline ecotone; however, the pixel-based approach may be preferable because it is more straightforward and more easily replicable than the object-based approach. These treeline mapping efforts will enhance future ecological treeline research by producing more accurate detections of trees and estimates of treeline position, and will be instrumental in building time series of imagery for future scientists conducting change detection studies at treeline.
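
    A minimal sketch of the fused inputs described above: NDVI computed from red and near-infrared bands, combined with a LiDAR-derived canopy height model, with a simple vegetation-and-height rule standing in for the full pixel-based classifier. The thresholds are illustrative assumptions, not values from the study.

```python
# NDVI plus LiDAR canopy height model (CHM) as fused inputs for tree mapping.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index; eps avoids divide-by-zero."""
    return (nir - red) / (nir + red + eps)

def tree_mask(nir, red, chm, ndvi_thresh=0.3, height_thresh=2.0):
    """Very simple rule: vegetated (NDVI) AND tall (CHM) pixels count as trees.
    Thresholds are illustrative, not those used in the study."""
    return (ndvi(nir, red) > ndvi_thresh) & (chm > height_thresh)
```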

    Polarimetric Synthetic Aperture Radar

    This open access book focuses on the practical application of electromagnetic polarimetry principles in Earth remote sensing, with an educational purpose. In the last decade, the operation of fully polarimetric synthetic aperture radars such as the Japanese ALOS/PalSAR, the Canadian Radarsat-2, and the German TerraSAR-X, together with easy data access for scientific use, has further developed research and data applications at L-, C-, and X-band. As a consequence, the wider distribution of polarimetric data sets across the remote sensing community has boosted activity and development in polarimetric SAR applications, also in view of future missions. Numerous experiments with real data from spaceborne platforms are shown, with the aim of giving an up-to-date and complete treatment of the unique benefits of fully polarimetric synthetic aperture radar data in five different domains: forest, agriculture, cryosphere, urban, and oceans.

    A Multimodal Feature Selection Method for Remote Sensing Data Analysis Based on Double Graph Laplacian Diagonalization

    When dealing with multivariate remotely sensed records collected by multiple sensors, an accurate selection of information at the data, feature, or decision level is instrumental in improving the characterization of the scenes. It also enhances the system's efficiency and provides more detail for modeling the physical phenomena occurring on the Earth's surface. In this article, we introduce a flexible and efficient method based on graph Laplacians for information selection at different levels of data fusion. The proposed approach combines data structure and information content to address the limitations of existing graph-Laplacian-based methods in dealing with heterogeneous datasets. Moreover, it adapts the selection to each homogeneous area of the considered images according to their underlying properties. Experimental tests carried out on several multivariate remote sensing datasets show the consistency of the proposed approach.
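
    For readers unfamiliar with graph-Laplacian-based selection, the sketch below ranks features with the generic Laplacian score; it is only an illustrative baseline, not the double graph Laplacian diagonalization method introduced in the article.

```python
# Generic Laplacian-score feature ranking as a simple stand-in for
# graph-Laplacian-based selection; lower score = the feature better preserves
# the local neighborhood structure of the samples.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_scores(X, n_neighbors=5):
    """X: (n_samples, n_features). Returns one score per feature (ascending = better)."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                      # symmetrize the kNN graph
    W = W.toarray()
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                                # unnormalized graph Laplacian
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f_tilde = f - (f @ d) / d.sum()      # remove the D-weighted mean
        scores[r] = (f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde + 1e-12)
    return scores
```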

    Clearing the Clouds: Extracting 3D information from amongst the noise

    Advancements permitting the rapid extraction of 3D point clouds from a variety of imaging modalities across the global landscape have provided a vast collection of high-fidelity digital surface models. This has created a situation with an unprecedented overabundance of 3D observations, which greatly outstrips our current capacity to manage and infer actionable information. While years of research have removed some of the manual analysis burden for many tasks, human analysis is still a cornerstone of 3D scene exploitation. This is especially true for complex tasks which necessitate comprehension of scale, texture, and contextual learning. In order to ameliorate the interpretation burden and enable scientific discovery from this volume of data, new processing paradigms are necessary to keep pace. With this context, this dissertation advances fundamental and applied research in 3D point cloud data pre-processing and deep learning from a variety of platforms. We show that the representation of 3D point data is often not ideal and sacrifices fidelity, context, or scalability. First, ground-scanning terrestrial Light Detection And Ranging (LiDAR) models are shown to have an inherent statistical bias, and a state-of-the-art method for correcting it while preserving data fidelity and maintaining semantic structure is presented. This technique is assessed in the dense canopy of Micronesia, where it is the best at retaining high levels of detail under extreme down-sampling (< 1%). Airborne systems are then explored with a method that pre-processes data to preserve global contrast and semantic content for deep learners. This approach is validated on a building footprint detection task using airborne imagery captured in eastern Tennessee from the 3D Elevation Program (3DEP), where it achieves significant accuracy improvements over traditional techniques. Finally, topography data spanning the globe are used to assess past and present global land cover change. Utilizing Shuttle Radar Topography Mission (SRTM) and Moderate Resolution Imaging Spectroradiometer (MODIS) data, paired with the airborne pre-processing technique described previously, a model for predicting land-cover change from topography observations is described. The culmination of these efforts has the potential to enhance the capabilities of automated 3D geospatial processing, substantially lightening the burden of analysts, with implications for improving our responses to global security, disaster response, climate change, structural design, and extraplanetary exploration.
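
    As a small illustration of the kind of point-cloud pre-processing discussed above, the sketch below performs plain voxel-grid down-sampling of an (N, 3) cloud; it is a common baseline, not the dissertation's bias-correcting, detail-preserving method.

```python
# Voxel-grid down-sampling of an (N, 3) point cloud: keep one centroid per voxel.
import numpy as np

def voxel_downsample(points, voxel_size=0.25):
    """points: (N, 3) array in metres. Returns one centroid per occupied voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)      # voxel index per point
    _, inverse = np.unique(idx, axis=0, return_inverse=True)  # group points by voxel
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)                          # accumulate coordinates
    np.add.at(counts, inverse, 1)                             # count points per voxel
    return sums / counts[:, None]                             # voxel centroids
```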

    Change Detection Methods for Remote Sensing in the Last Decade: A Comprehensive Review

    Change detection is an essential and widely used task in remote sensing that aims to detect and analyze changes occurring in the same geographical area over time, with broad applications in urban development, agricultural surveys, and land cover monitoring. Detecting changes in remote sensing images is a complex challenge due to various factors, including variations in image quality, noise, registration errors, illumination changes, complex landscapes, and spatial heterogeneity. In recent years, deep learning has emerged as a powerful tool for feature extraction and for addressing these challenges, and its versatility has resulted in its widespread adoption for numerous image-processing tasks. This paper presents a comprehensive survey of significant advancements in change detection for remote sensing images over the past decade. We first introduce preliminary knowledge for the change detection task, such as the problem definition, datasets, evaluation metrics, and transformer basics, and provide, in the Methodology section, a detailed taxonomy of existing algorithms from three different perspectives: algorithm granularity, supervision modes, and frameworks. This survey enables readers to gain systematic knowledge of change detection tasks from various angles. We then summarize the state-of-the-art performance on several dominant change detection datasets, providing insights into the strengths and limitations of existing algorithms. Based on our survey, some future research directions for change detection in remote sensing are identified. This survey sheds some light on the topic for the community and will inspire further research efforts in the change detection task.
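
    For context, the sketch below shows one of the simplest classical baselines covered by such surveys, change vector analysis with a global threshold on two co-registered images; the deep learning methods reviewed in the paper are far more elaborate.

```python
# Minimal change-vector-analysis (CVA) baseline on two co-registered,
# radiometrically comparable multiband images.
import numpy as np

def change_map(img_t1, img_t2, k=2.0):
    """img_t1, img_t2: (H, W, bands). Returns a boolean change mask where the
    per-pixel spectral change magnitude exceeds mean + k * std."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=-1)           # change vector length
    threshold = magnitude.mean() + k * magnitude.std()  # simple global threshold
    return magnitude > threshold
```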

    Advanced Processing of Multispectral Satellite Data for Detecting and Learning Knowledge-based Features of Planetary Surface Anomalies

    The marked increase in the inflow of remotely sensed data from satellites has transformed the Earth and space sciences into a data-rich domain, creating a rich repository for domain experts to analyze. These observations shed light on a diverse array of disciplines, ranging from monitoring Earth system components to planetary exploration, by highlighting the expected trends and patterns in the data. However, the complexity of these patterns, from local to global scales, coupled with the volume of this ever-growing repository, necessitates advanced techniques to sequentially process the datasets and determine the underlying trends. Such techniques essentially model the observations to learn characteristic parameters of data-generating processes and highlight anomalous planetary surface observations to help domain scientists make informed decisions. The primary challenge in defining such models arises from the spatio-temporal variability of these processes. This dissertation introduces models of multispectral satellite observations that sequentially learn the expected trend from the data by extracting salient features of planetary surface observations. The main objectives are to learn the temporal variability for modeling dynamic processes and to build representations of features of interest that are learned over the lifespan of an instrument. The estimated model parameters are then exploited to detect anomalies due to changes in land surface reflectance as well as novelties in planetary surface landforms. A model-switching approach is proposed that allows the selection of the best-matched representation given the observations and is designed to account for the rate of time variability of the land surface. The estimated parameters are exploited to design a change detector, analyze the separability of change events, and form an expert-guided representation of planetary landforms for prioritizing the retrieval of scientifically relevant observations, with both onboard and post-downlink applications.
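
    As a toy illustration of sequential trend learning and anomaly flagging, the sketch below maintains an exponentially weighted running mean and variance for a per-pixel reflectance time series and flags large deviations; it stands in for, and is much simpler than, the dissertation's learned data-generating models.

```python
# Sequential anomaly flagging for a 1-D reflectance time series using an
# exponentially weighted running mean/variance (illustrative only).
import numpy as np

def flag_anomalies(series, alpha=0.1, k=3.0):
    """Flags samples more than k standard deviations from the running estimate."""
    mean, var = series[0], 1e-3
    flags = np.zeros(series.shape, dtype=bool)
    for t, x in enumerate(series[1:], start=1):
        if abs(x - mean) > k * np.sqrt(var):
            flags[t] = True                               # anomalous w.r.t. learned trend
        else:                                             # update only on normal samples
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return flags
```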

    Coastal Eye: Monitoring Coastal Environments Using Lightweight Drones

    Monitoring coastal environments is a challenging task, both because of the logistical demands involved in in-situ data collection and because of the dynamic nature of the coastal zone, where multiple processes operate over varying spatial and temporal scales. Remote sensing products derived from spaceborne and airborne platforms have proven highly useful in the monitoring of coastal ecosystems, but they often fail to capture fine-scale processes, and there remains a lack of cost-effective and flexible methods for coastal monitoring at these scales. Proximal sensing technology such as lightweight drones and kites has greatly improved the ability to capture fine spatial resolution data at user-dictated visit times. These approaches are democratising, allowing researchers and managers to collect data themselves in locations and at times they define. In this thesis I develop our scientific understanding of the application of proximal sensing within coastal environments. Two critical review pieces consolidate disparate information on the application of kites as a proximal sensing platform and on the often overlooked hurdles of conducting drone operations in challenging environments. The empirical work presented then tests the use of this technology in three different coastal environments spanning the land-sea interface. Firstly, I use kite aerial photography and uncertainty-assessed structure-from-motion multi-view stereo (SfM-MVS) processing to track changes in coastal dunes over time, and report that sub-decimetre changes (both erosion and accretion) can be detected with this methodology. Secondly, I use lightweight drones to capture fine spatial resolution optical data of intertidal seagrass meadows, finding that estimates of plant cover were more similar to in-situ measures in sparsely populated than in densely populated meadows. Lastly, I develop a novel technique utilising lightweight drones and SfM-MVS to measure benthic structural complexity in tropical coral reefs, finding that structural complexity measures were obtainable from SfM-MVS-derived point clouds, but that the technique was influenced by glint-type artefacts in the image data. Collectively, this work advances the knowledge of proximal sensing in the coastal zone, identifying both the strengths and weaknesses of its application across several ecosystems.
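
    A minimal sketch of the DEM-of-difference style of change analysis used for the dunes above: elevation change between two survey epochs is masked where it falls below a propagated level of detection. The uncertainty values here are illustrative assumptions, not those reported in the thesis.

```python
# DEM-of-difference (DoD) with a simple propagated level-of-detection mask.
import numpy as np

def dem_of_difference(dem_t1, dem_t2, sigma_t1=0.03, sigma_t2=0.03, k=1.96):
    """dem_t1, dem_t2: (H, W) elevation grids in metres. Returns the change grid
    with cells below the minimum level of detection set to NaN (95% for k=1.96)."""
    dod = (dem_t2 - dem_t1).astype(float)
    lod = k * np.sqrt(sigma_t1**2 + sigma_t2**2)   # propagated detection threshold
    dod[np.abs(dod) < lod] = np.nan                # mask sub-detection-limit change
    return dod
```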