
    ISLAND: Informing Brightness and Surface Temperature Through a Land Cover-based Interpolator

    Cloud occlusion is a common problem in remote sensing, particularly for thermal infrared imaging. Remote sensing thermal instruments onboard operational satellites are intended to enable frequent, high-resolution observations over land; unfortunately, clouds adversely affect thermal signals by blocking outgoing longwave radiation emitted from Earth's surface, interfering with the retrieved ground emission temperature. Such cloud contamination severely reduces the set of serviceable thermal images for downstream applications, making it impractical to perform intricate time-series analysis of land surface temperature (LST). In this paper, we introduce a novel method to remove cloud occlusions from Landsat 8 LST images. We call our method ISLAND, an acronym for Informing Brightness and Surface Temperature Through a Land Cover-based Interpolator. Our approach uses thermal infrared images from Landsat 8 (at 30 m resolution with a 16-day revisit cycle) and the NLCD land cover dataset. Inspired by Tobler's first law of geography, ISLAND predicts occluded brightness temperature and LST through a set of filters that perform distance-weighted spatio-temporal interpolation. A critical feature of ISLAND is that the filters are land cover-class aware, making it particularly advantageous in complex urban settings with heterogeneous land cover types and distributions. Through qualitative and quantitative analysis, we show that ISLAND achieves robust reconstruction performance across a variety of cloud occlusion and surface land cover conditions, at high spatio-temporal resolution. We provide a public dataset of 20 U.S. cities with pre-computed ISLAND thermal infrared and LST outputs. Using several case studies, we demonstrate that ISLAND opens the door to a multitude of high-impact urban and environmental applications across the continental United States. (22 pages, 9 figures)
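
The land cover-class-aware, distance-weighted interpolation described above can be illustrated with a minimal inverse-distance-weighting (IDW) sketch. The function name, the plain row/column grid, and the restriction to a single spatial (non-temporal) dimension are illustrative assumptions, not the authors' implementation:

```python
def idw_fill(grid, land_cover, occluded, power=2.0):
    """Fill occluded cells using only clear cells of the same land cover class.

    grid       : 2D list of pixel values (e.g. brightness temperature)
    land_cover : 2D list of land cover class codes, same shape as grid
    occluded   : set of (row, col) tuples marking cloud-occluded pixels
    """
    filled = [row[:] for row in grid]
    for (i, j) in occluded:
        num, den = 0.0, 0.0
        for x, row in enumerate(grid):
            for y, val in enumerate(row):
                # Skip occluded pixels and pixels of a different land cover class.
                if (x, y) in occluded or land_cover[x][y] != land_cover[i][j]:
                    continue
                d = ((x - i) ** 2 + (y - j) ** 2) ** 0.5
                num += val / d ** power
                den += 1.0 / d ** power
        if den > 0:
            filled[i][j] = num / den
    return filled
```

Because only same-class clear pixels contribute, a cloudy pixel over, say, developed land is never interpolated from nearby water or forest pixels, which is the property that matters in heterogeneous urban scenes.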

    Cloud removal from optical remote sensing images

    Optical remote sensing images used for Earth surface observations are constantly contaminated by cloud cover. Clouds dynamically affect the applications of optical data and increase the difficulty of image analysis. Clouds are therefore considered a source of noise in optical image data, and their detection and removal must be performed as a pre-processing step in most remote sensing image processing applications. This thesis investigates current cloud detection and removal algorithms and develops three new cloud removal methods to improve the accuracy of the results. The first contribution is a thin cloud removal method based on signal transmission principles and spectral mixture analysis (ST-SMA) for pixel correction. This method considers not only the additive reflectance from the clouds but also the energy absorbed when solar radiation passes through them. Data correction is achieved by subtracting the product of the cloud endmember signature and the cloud abundance and rescaling according to the cloud thickness. The proposed method requires no meteorological data and does not rely on reference images. The experimental results indicate that the proposed approach effectively removes thin clouds in different scenarios. In the second study, an effective cloud removal method is proposed that takes advantage of the noise-adjusted principal components transform (CR-NAPCT). It is found that, when spatial correlation is considered, cloud-contaminated data have a higher signal-to-noise ratio (S/N) than cloud-free data and are concentrated in the first NAPCT component (NAPC1). An inverse transformation with a modified first component is then applied to generate the cloud-free image. The effectiveness of the proposed method is assessed through experiments on simulated and real data, comparing the quantitative and qualitative performance of the proposed approach.
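
The ST-SMA correction step, subtracting the cloud endmember contribution and rescaling, can be sketched per pixel as below. The exact rescaling form (dividing by the remaining ground fraction) is an assumption for illustration; the thesis derives its correction from signal transmission principles:

```python
def st_sma_correct(observed, cloud_abundance, cloud_signature):
    """Thin-cloud pixel correction in the spirit of ST-SMA (sketch).

    observed        : per-band observed reflectances of one pixel
    cloud_abundance : estimated cloud fraction for that pixel (0..1)
    cloud_signature : per-band cloud endmember reflectances
    """
    corrected = []
    for band_obs, band_sig in zip(observed, cloud_signature):
        # Remove the additive cloud contribution, then rescale by the
        # ground fraction to compensate for cloud absorption (assumed form).
        corrected.append((band_obs - cloud_abundance * band_sig)
                         / (1.0 - cloud_abundance))
    return corrected
```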
The third study of this thesis deals with both cloud and cloud shadow problems with the aid of an auxiliary image acquired under clear-sky conditions. A new cloud removal approach called multitemporal dictionary learning (MDL) is proposed. Dictionaries for the cloudy areas (target data) and the cloud-free areas (reference data) are learned separately in the spectral domain using an online dictionary learning method. The removal process is conducted using the coefficients from the reference image and the dictionary learned from the target image. This method is able to recover data contaminated by thin and thick clouds or cloud shadows. The experimental results show that the MDL method is effective from both quantitative and qualitative viewpoints.
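
The MDL reconstruction step, coding a pixel against one dictionary and decoding with the other, can be sketched as follows. Real MDL learns both dictionaries with online dictionary learning and uses sparse coding; here the dictionaries are small fixed matrices and the coding step is replaced by ordinary least squares, purely for illustration:

```python
import numpy as np

def mdl_reconstruct(d_target, d_reference, clear_pixel):
    """Sketch of the MDL removal step: code a clear reference pixel against
    the reference dictionary, then decode the coefficients with the
    dictionary learned from the (cloudy) target image."""
    # Least-squares stand-in for the sparse coding of the reference pixel.
    alpha, *_ = np.linalg.lstsq(d_reference, clear_pixel, rcond=None)
    # Reconstruct the occluded pixel in the target image's dictionary.
    return d_target @ alpha
```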

    Gap-filling using machine learning: implementations and applications in remote sensing

    Gap-filling is an important preprocessing step in remote sensing applications because it recovers Earth surface records lost to sensor failures and cloud cover, enabling successful sensor-based studies. To date, a great number of methods have been proposed to reconstruct missing data in remote sensing images, but methods that deliver satisfactory performance on large-area gaps over heterogeneous landscapes are scant. To address this problem, this thesis proposes two methods, Missing Observations Prediction based on Spectral-Temporal Metrics (MOPSTM) and Spectral and Temporal Information for Missing Data Reconstruction (STIMDR), that are capable of recovering small and large-area gaps in Landsat time series. Machine learning algorithms are used to implement MOPSTM and STIMDR. MOPSTM applies k-Nearest Neighbors (k-NN) regression to the target image (i.e., the image to be reconstructed) and spectral-temporal metrics (STMs, e.g., statistical quantiles) derived from a 1-year Landsat time series. Building on MOPSTM, STIMDR achieves more powerful performance by employing an effective mechanism that excludes dissimilar data in a longer time series (e.g., data affected by land cover change). The proposed methods are compared site-by-site with six state-of-the-art gap-filling methods, including three temporal interpolation methods and three hybrid methods. Across four study sites in Kenya, Finland, Germany, and China, MOPSTM and STIMDR were more accurate and robust than the other methods, with STIMDR yielding higher accuracy than MOPSTM. Although gap-filling methods are proposed with increasing frequency, their necessity and effects are rarely evaluated, leaving an unaddressed research gap. This thesis addresses it using land use and land cover (LULC) classification and tree canopy cover (TCC) modelling with the assistance of machine learning algorithms.
The random forest algorithm is used to examine whether gap-filled images outperform non-gap-filled (actual) images in LULC and TCC applications. The results indicate that (i) gap-filled images perform no worse than the actual image in LULC classification, and (ii) gap-filled predictors derived from the Landsat time series deliver better performance on average than non-gap-filled predictors in TCC modelling. We therefore conclude that gap-filling has positive effects on LULC classification and TCC modelling, which justifies its inclusion in image preprocessing workflows.
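
The k-NN regression at the core of MOPSTM can be sketched in a few lines: each missing pixel is predicted from the target values of its nearest neighbours in STM feature space. The function name, plain Euclidean distance, and unweighted averaging are illustrative assumptions, not the thesis implementation:

```python
def knn_gap_fill(features, targets, query, k=3):
    """Predict a missing pixel value by k-NN regression.

    features : list of STM feature vectors for pixels with clear observations
    targets  : observed values of those pixels in the target image
    query    : STM feature vector of the gap pixel to be filled
    """
    # Euclidean distance from the query to every clear pixel's features.
    dists = sorted(
        (sum((f - q) ** 2 for f, q in zip(feat, query)) ** 0.5, t)
        for feat, t in zip(features, targets)
    )
    # Average the target values of the k closest neighbours.
    neighbours = dists[:k]
    return sum(t for _, t in neighbours) / len(neighbours)
```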

    Satellite-based phenology analysis in evaluating the response of Puerto Rico and the United States Virgin Islands' tropical forests to the 2017 hurricanes

    The functionality of tropical forest ecosystems and their productivity are closely tied to the timing of phenological events. Understanding forest responses to major climate events is crucial for predicting the potential impacts of climate change. This research utilized Landsat satellite data and ground-based Forest Inventory and Analysis (FIA) plot data to investigate the dynamics of Puerto Rico and the U.S. Virgin Islands’ (PRVI) tropical forests after two major hurricanes in 2017. Analyzing these two datasets allowed the remote sensing methodology to be validated with field data and its suitability for estimating forest health in areas lacking in-situ data to be assessed. I performed extensive cloud masking on the satellite imagery to produce masked, repaired, near cloud-free images, which were used to extract phenology metrics and produce annual phenology curves. FIA data were used to estimate forest percent mortality and change in aboveground live biomass (AGLB). Simple and multiple linear regression were used to explore the relationship between the FIA data and the remote sensing-derived phenology metrics and to compare trends. Consistent post-hurricane trends emerged: a decrease in AGLB, an increase in mortality, and a decrease in phenology index values in the first year after the hurricanes, followed by a spike in values in the second year. Significant changes were found in AGLB and in the phenology metrics before and after the hurricanes; however, no significant linear relationships were found between the FIA data and the remote sensing data. Meaningful phenology curves were successfully generated when analyzing a small region with only one forest type and no data gaps.
The results therefore help build a baseline understanding of the dynamics of PRVI’s tropical forests relative to climate change and give a clearer indication of the capabilities of the remotely sensed data. Furthermore, this research demonstrated approaches and techniques that can be applied to larger, global sustainability goals to sustain living systems in times of climate variability and change.
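
The simple linear regression used above to relate FIA measurements to phenology metrics amounts to an ordinary least-squares fit; a minimal sketch (with hypothetical variable names, not the study's actual data) is:

```python
def simple_linear_fit(x, y):
    """Ordinary least-squares fit y = slope * x + intercept, as used to
    relate a phenology metric (x) to an FIA variable such as percent
    mortality (y). Returns (slope, intercept)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope is the covariance of x and y divided by the variance of x.
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x
```

A non-significant slope (as found in the study) would mean the phenology metric carries little linear information about the field-measured variable.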

    Deep internal learning for inpainting of cloud-affected regions in satellite imagery

    Cloud cover remains a significant limitation to a broad range of applications relying on optical remote sensing imagery, including crop identification/yield prediction, climate monitoring, and land cover classification. A common approach to cloud removal treats the problem as an inpainting task and imputes optical data in the cloud-affected regions, either by mosaicking historical data or by using sensing modalities not impacted by cloud obstruction, such as SAR. Recently, deep learning approaches have been explored for these applications; however, most reported solutions rely on external learning, i.e., models trained on fixed datasets. Although these models perform well within the context of a particular dataset, there is a significant risk of spatial and temporal overfitting when they are applied in different locations or at different times. Here, cloud removal was implemented within an internal learning regime through an inpainting technique based on the deep image prior. The approach was evaluated on both a synthetic dataset with an exact ground truth and real samples. The ability to inpaint cloud-affected regions under varying weather conditions across a whole year with no prior training was demonstrated, and the performance of the approach was characterised.
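
The key ingredient of deep-image-prior inpainting is that the network's reconstruction loss is evaluated only on clear pixels, so the network output in the masked region becomes the imputed data. That masked loss can be sketched independently of any specific network (the flat pixel lists and boolean mask here are illustrative simplifications):

```python
def masked_mse(prediction, target, cloud_mask):
    """Mean squared reconstruction error over clear pixels only.

    prediction : network output pixels (flat list)
    target     : observed cloudy image pixels (flat list)
    cloud_mask : booleans, True where the pixel is cloud-affected
    """
    terms = [
        (p - t) ** 2
        for p, t, m in zip(prediction, target, cloud_mask)
        if not m  # cloud-affected pixels contribute nothing to the loss
    ]
    return sum(terms) / len(terms)
```

Fitting a single network to one image under this loss is "internal learning": no external training set is involved, so the spatial/temporal overfitting risk described above does not arise.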

    Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks

    Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects from the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNN) with global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using geospatial overhead imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE), is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested with the handcrafted feature-based methods SURF and SIFT and a non-handcrafted feature-based pre-trained fine-tuned VGG-16 DLCNN on coarse-grained image registration. Both handcrafted and non-handcrafted feature-based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for the coarse-grained registration process with DLCNNs, and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML), is covered in detail.
A multi-resolution grid-based approach is used, where each grid cell ID is treated as the supervised training label for that resolution. Pre-trained fine-tuned VGG-16 DLCNNs, two custom-architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages than the pre-trained fine-tuned VGG-16 DLCNN on JNE RCI. This shows that the process for creating JNE ML is valid and suitable for applying machine learning to the coarse-grained registration problem. All custom-architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages than the fine-tuned pre-trained VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN generalized well to new imagery on which they had not previously been trained. Through the contributions of this research, a foundation is laid for future work on the UAS global localization problem within the rural, forested JNE AOI.
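
The grid-cell labeling scheme described above, where coarse registration becomes classification over cell IDs, can be sketched as a coordinate-to-label mapping. The function name, square AOI extent, and row-major ID numbering are illustrative assumptions, not the dataset's actual layout:

```python
def grid_cell_id(x, y, extent, cells_per_side):
    """Map a coordinate inside a square AOI to a grid cell ID, the kind of
    supervised label used for coarse-grained registration.

    extent : (x_min, y_min, x_max, y_max) bounds of the AOI
    """
    x_min, y_min, x_max, y_max = extent
    col = int((x - x_min) / (x_max - x_min) * cells_per_side)
    row = int((y - y_min) / (y_max - y_min) * cells_per_side)
    # Clamp points lying exactly on the far edge into the last cell.
    col = min(col, cells_per_side - 1)
    row = min(row, cells_per_side - 1)
    return row * cells_per_side + col
```

A multi-resolution variant simply repeats this at several values of `cells_per_side`, giving one label per resolution and letting a coarse prediction constrain the finer ones.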

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data being, and yet to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.