
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make it ridiculously simple to get started with deep learning in remote sensing. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedentedly large-scale, influential challenges such as climate change and urbanization. (Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine)

    Intersensor Remote Sensing Image Registration Using Multispectral Semantic Embeddings

    This letter presents a novel intersensor registration framework specially designed to register Sentinel-3 (S3) operational data using the Sentinel-2 (S2) instrument as a reference. The substantially higher resolution of the Multispectral Instrument (MSI), on board S2, with respect to the Ocean and Land Colour Instrument (OLCI), carried by S3, makes the former sensor a suitable spatial reference for finely adjusting OLCI products. Nonetheless, the important spectral-spatial differences between the two instruments may prevent traditional registration mechanisms from effectively aligning data of such different nature. In this context, the proposed registration scheme advocates the use of a topic model-based embedding approach to conduct the intersensor registration task within a common multispectral semantic space, where the input imagery is represented according to its corresponding spectral feature patterns instead of low-level attributes. Thus, the OLCI products can be effectively registered to the MSI reference data by aligning the hidden patterns that fundamentally express the same visual concepts across the sensors. The experiments, conducted over four different S2 and S3 operational data collections, reveal that the proposed approach provides performance advantages over six different intersensor registration counterparts.

    Geo-rectification and cloud-cover correction of multi-temporal Earth observation imagery

    Over the past decades, improvements in remote sensing technology have led to a mass proliferation of aerial imagery. This, in turn, has opened vast new possibilities relating to land cover classification, cartography, and so forth. As applications in these fields became increasingly complex, the amount of data required rose accordingly, and so, to satisfy these new needs, automated systems had to be developed. Geometric distortions in raw imagery must be rectified, otherwise the high accuracy requirements of the newest applications will not be attained. This dissertation proposes an automated solution for the pre-stages of multi-spectral satellite imagery classification, focusing on geo-rectification based on the Fourier shift theorem and on multi-temporal cloud-cover correction. By automating the first stages of image processing, automatic classifiers can take advantage of a larger supply of image data, eventually allowing for the creation of semi-real-time mapping applications.
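    The Fourier-shift-theorem geo-rectification mentioned above rests on phase correlation: a pure translation between two images becomes a linear phase ramp in the frequency domain, and the inverse transform of the normalised cross-power spectrum is an impulse at the displacement. A minimal sketch of that idea (the synthetic images and the integer-shift assumption are illustrative; this is not the dissertation's actual pipeline):

    ```python
    import numpy as np

    def estimate_shift(reference, target):
        """Recover the integer (row, col) translation between two images via
        phase correlation, an application of the Fourier shift theorem."""
        cross = np.fft.fft2(target) * np.conj(np.fft.fft2(reference))
        cross /= np.abs(cross) + 1e-12      # keep only the phase difference
        corr = np.fft.ifft2(cross).real     # impulse at the displacement
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks beyond the midpoint wrap around and denote negative shifts.
        return tuple(int(p) if p <= s // 2 else int(p) - s
                     for p, s in zip(peak, corr.shape))

    # Synthetic check: shift an image by (3, 5) pixels and recover the offset.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
    print(estimate_shift(img, shifted))  # → (3, 5)
    ```

    Because the whole image contributes to the correlation peak, this estimator is robust to noise, which is one reason FFT-based methods suit raw satellite imagery.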

    UAV-Multispectral Sensed Data Band Co-Registration Framework

    Precision farming has greatly benefited from new technologies over the years. The use of multispectral and hyperspectral sensors coupled to Unmanned Aerial Vehicles (UAVs) has enabled farms to monitor crops, improve the use of resources, and reduce costs. Despite being widely used, multispectral images present a natural misalignment among the various spectral bands due to the use of different sensors. Variation across the analyzed spectrum also causes a loss of shared characteristics between bands, which hinders feature detection between them and makes the alignment process complex. In this work, we propose a new framework for the band co-registration process based on two premises: i) the natural misalignment is an attribute of the camera, so it does not change during the acquisition process; ii) the displacement of the UAV during the interval between the acquisition of the first and last bands is not sufficient to create significant distortions. We compared our results with the ground truth generated by a specialist and with other methods in the literature. The proposed framework had an average back-projection (BP) error of 0.425 pixels, a result 335% better than the evaluated frameworks.
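    Premise i) implies the band-to-band offset can be estimated once and reused for every frame, and the BP error then measures how far the corrected keypoints land from their ground-truth positions. A toy sketch of both ideas, assuming a pure-translation misalignment and hypothetical matched keypoints (the paper's actual transform model may be richer):

    ```python
    import numpy as np

    def estimate_band_offset(pts_band, pts_ref):
        """Least-squares translation mapping one band's keypoints onto the
        reference band (premise i: the offset is fixed per camera)."""
        return np.mean(np.asarray(pts_ref) - np.asarray(pts_band), axis=0)

    def back_projection_error(pts_band, pts_ref, offset):
        """Mean pixel distance between corrected and ground-truth keypoints."""
        residual = np.asarray(pts_ref) - (np.asarray(pts_band) + offset)
        return np.linalg.norm(residual, axis=1).mean()

    # Hypothetical keypoints matched between a NIR band and the reference band.
    nir = np.array([[10.0, 12.0], [40.0, 33.0], [75.0, 61.0]])
    ref = nir + np.array([2.5, -1.0])           # true camera misalignment
    offset = estimate_band_offset(nir, ref)     # recovers [2.5, -1.0]
    print(back_projection_error(nir, ref, offset))  # → 0.0
    ```

    Once estimated, the same `offset` would be applied to every frame of the flight, which is exactly what makes the fixed-misalignment premise valuable.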

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets with a high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. Surveyed applications are focused on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are presented.

    A deep semantic vegetation health monitoring platform for citizen science imaging data

    Automated monitoring of vegetation health in a landscape is often based on calculating values of various vegetation indices over a period of time. However, such approaches suffer from inaccurate estimation of vegetation change due to the over-reliance of index values on the vegetation's colour attributes and on the availability of multi-spectral bands. One common observation is the sensitivity of colour attributes to seasonal variations and imaging devices, leading to false and inaccurate change detection and monitoring; moreover, these are very strong assumptions to make in a citizen science project. In this article, we build upon our previous work on a Semantic Vegetation Index (SVI) and expand it into a semantic vegetation health monitoring platform for large landscapes. Unlike our previous work, we use RGB images of the Australian landscape in a quarterly series over six years (2015–2020). The SVI is based on deep semantic segmentation, allowing it to be integrated with a citizen science project (Fluker Post) for automated environmental monitoring; the project has collected thousands of vegetation images shared by visitors at around 168 points located in Australian regions over six years. This paper first uses a deep learning-based semantic segmentation model to classify vegetation in repeated photographs. A semantic vegetation index is then calculated and plotted as a time series to reflect seasonal variations and environmental impacts. The results show variational trends of vegetation cover for each year, and the semantic segmentation model performed well in calculating vegetation cover based on semantic pixels (overall accuracy = 97.7%). This work solves a number of problems related to changes in viewpoint, scale, zoom, and season in order to normalise RGB image data collected from different imaging devices.
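    Once each pixel carries a semantic label, a vegetation index of this kind reduces to a pixel fraction over the segmentation mask, which is what makes it independent of colour attributes and extra spectral bands. A minimal sketch (the class ids and mask are illustrative assumptions, not the paper's actual label scheme):

    ```python
    import numpy as np

    def semantic_vegetation_index(label_mask, vegetation_classes=(1,)):
        """Fraction of pixels whose semantic label is a vegetation class.

        `label_mask` is an (H, W) array of per-pixel class ids, as produced
        by a semantic segmentation model.
        """
        vegetated = np.isin(label_mask, vegetation_classes)
        return vegetated.mean()

    # Toy 4x4 mask: class 1 = vegetation, class 0 = background.
    mask = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 0],
                     [1, 1, 1, 0],
                     [0, 0, 0, 0]])
    print(semantic_vegetation_index(mask))  # → 0.375 (6 of 16 pixels)
    ```

    Computed per photograph and plotted over the quarterly series, this fraction yields the kind of time series the platform uses to track seasonal variation.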

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers' analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational complexity and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.