
    Tropical Cyclone Intensity Estimation Using Multi-Dimensional Convolutional Neural Networks from Geostationary Satellite Data

    For a long time, researchers have sought a way to analyze tropical cyclone (TC) intensity in real time. Since there is no standardized method for estimating TC intensity, and the most widely used method is a manual algorithm based on satellite cloud images, estimates carry a bias that varies with the TC center and shape. In this study, we adopted convolutional neural networks (CNNs), a state-of-the-art approach to image pattern analysis, to estimate TC intensity by mimicking human cloud-pattern recognition. Both a two-dimensional CNN (2D-CNN) and a three-dimensional CNN (3D-CNN) were used to analyze the relationship between multi-spectral geostationary satellite images and TC intensity. Our best-optimized model produced a root mean squared error (RMSE) of 8.32 kt, about 35% better than the existing CNN-based model that uses a single-channel image. Moreover, we analyzed the characteristics of multi-spectral satellite TC images by intensity using a heat map, one of the visualization tools for CNNs. It shows that the stronger the TC, the greater the influence of the TC center in the lower atmosphere. This is consistent with results from the existing TC initialization method using numerical simulations based on dynamical TC models. Our study suggests that a deep learning approach can be used to interpret the behavioral characteristics of TCs.
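    As a rough illustration of the 3D-CNN approach described above, the sketch below regresses a scalar intensity from a multi-spectral image stack with PyTorch. The band count, patch size and layer widths are illustrative assumptions, not the paper's configuration.

        # Minimal 3D-CNN sketch: convolve across spectral bands and space,
        # then regress a scalar TC intensity. All hyperparameters are assumed.
        import torch
        import torch.nn as nn

        class TCIntensity3DCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    # input: (batch, 1, bands, H, W)
                    nn.Conv3d(1, 16, kernel_size=(2, 3, 3), padding=(0, 1, 1)),
                    nn.ReLU(),
                    nn.MaxPool3d((1, 2, 2)),
                    nn.Conv3d(16, 32, kernel_size=(2, 3, 3), padding=(0, 1, 1)),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.regressor = nn.Linear(32, 1)  # scalar intensity in kt

            def forward(self, x):
                return self.regressor(self.features(x).flatten(1)).squeeze(-1)

        model = TCIntensity3DCNN()
        images = torch.randn(8, 1, 4, 64, 64)  # 8 samples, 4 assumed bands, 64x64 px
        targets = torch.randn(8)               # placeholder best-track intensities
        rmse = torch.sqrt(nn.functional.mse_loss(model(images), targets))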

    Hybrid Neural Networks with Attention-based Multiple Instance Learning for Improved Grain Identification and Grain Yield Predictions

    Agriculture is a critical part of the world's food production and a vital aspect of every society. Procedures must be adapted to each specific environment because climates and field conditions differ. Existing research has demonstrated the potential of grain yield prediction on Norwegian farms. However, that research is limited to regional analytics, which cannot capture enough of the plant-growth factors influenced by field conditions and farmers' decisions. One factor critical for yield prediction is the crop type planted on a per-field basis. This research effort proposes a novel approach for improving crop yield predictions using a hybrid deep neural network fed with temporal satellite imagery from a remote sensing system. Additionally, we use a variety of data, including grain production, meteorological data, and geographical data. The crop yield prediction system is supported by a field-based crop type classification model, which supplies features related to crop type and field area. Our crop classification system takes advantage of both raw satellite images and carefully chosen vegetation indices. Further, we propose a multi-class attention-based deep multiple instance learning model to exploit semi-labeled datasets, fully benefiting from Norwegian data acquisition. Our best crop classification model, which consists of a time-distributed network and a gated recurrent unit, classifies crop types with an accuracy of 70% and is currently state-of-the-art for country-wide crop type mapping in Norway. Lastly, our yield prediction system enables realistic in-season early predictions that could benefit actors in real-life scenarios.
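    The attention-based multiple instance learning component can be pictured as a learned weighting of per-instance embeddings before classification, in the spirit of Ilse et al. (2018). Below is a minimal PyTorch sketch under assumed dimensions, not the authors' implementation.

        # Attention-based MIL pooling: score each instance, softmax the scores,
        # and classify the attention-weighted bag embedding.
        import torch
        import torch.nn as nn

        class AttentionMILHead(nn.Module):
            def __init__(self, emb_dim=128, attn_dim=64, n_classes=10):
                super().__init__()
                self.attention = nn.Sequential(
                    nn.Linear(emb_dim, attn_dim), nn.Tanh(),
                    nn.Linear(attn_dim, 1),
                )
                self.classifier = nn.Linear(emb_dim, n_classes)

            def forward(self, bag):                            # bag: (n_instances, emb_dim)
                a = torch.softmax(self.attention(bag), dim=0)  # per-instance weights
                z = (a * bag).sum(dim=0)                       # weighted bag embedding
                return self.classifier(z), a.squeeze(-1)

        head = AttentionMILHead()
        instances = torch.randn(25, 128)   # e.g. 25 patch embeddings for one bag
        logits, weights = head(instances)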

    Retrogressive Thaw Slump Identification Using U-Net and Satellite Image Inputs: Remote Sensing Imagery Segmentation Using Deep Learning Techniques

    Global warming has been a topic of discussion for many decades; however, its impact on the thaw of permafrost, and vice versa, has not been well captured or documented in the past. This may be because most permafrost lies in the Arctic and similarly vast remote areas, which makes data collection difficult and costly. A partial solution to this problem is the use of remote sensing imagery, which has been used for decades to document changes in permafrost regions. Despite its many benefits, this methodology has still required manual assessment of images, which can be a slow and laborious task for researchers. Over the last decade, the growth of deep learning has helped address these limitations. The use of deep learning on remote sensing imagery has risen in popularity, mainly due to the increased availability and scale of remote sensing data. This has been fuelled in recent years by open-source multi-spectral high-spatial-resolution data, such as the Sentinel-2 data used in this project. Notwithstanding the growth of deep learning for remote sensing imagery, its use for the particular case of identifying permafrost thaw, addressed in this project, has not been widely studied. To address this gap, the semantic segmentation model proposed in this project performs pixel-wise classification of satellite images to identify retrogressive thaw slumps (RTSs), using a U-Net architecture. The identification of RTSs from satellite images is achieved with an average Dice score of 95% over the 39 test images evaluated, showing that it is possible to pre-process such images and achieve satisfactory results using 10-meter spatial resolution and as few as four spectral bands. Since these landforms can serve as a proxy for permafrost thaw, the aim is for this project to help make progress towards mitigating the impact of such a powerful geophysical phenomenon.
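    For reference, the Dice score used for evaluation can be computed directly from binary masks. A minimal sketch (function name and epsilon are ours, not the project's code):

        # Dice = 2*|P & T| / (|P| + |T|) for a predicted and a reference binary mask.
        import numpy as np

        def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

        # Average over a test set, e.g. the 39 evaluated images:
        # mean_dice = np.mean([dice_score(p, t) for p, t in zip(pred_masks, truth_masks)])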

    The Canadian Cropland Dataset: A New Land Cover Dataset for Multitemporal Deep Learning Classification in Agriculture

    Monitoring land cover using remote sensing is vital for studying environmental changes and ensuring global food security through crop yield forecasting. Specifically, multitemporal remote sensing imagery provides relevant information about the dynamics of a scene, which has proven to lead to better land cover classification results. Nevertheless, few studies have benefited from high spatial and temporal resolution data due to the difficulty of accessing reliable, fine-grained and high-quality annotated samples to support their hypotheses. Therefore, we introduce a temporal patch-based dataset of Canadian croplands, enriched with labels retrieved from the Canadian Annual Crop Inventory. The dataset contains 78,536 manually verified and curated high-resolution (10 m/pixel, 640 x 640 m) geo-referenced images from 10 crop classes collected over four crop production years (2017-2020) and five months (June-October). Each instance contains 12 spectral bands, an RGB image, and additional vegetation index bands. Individually, each category contains at least 4,800 images. Moreover, as a benchmark, we provide models and source code that allow a user to predict the crop class using a single image (ResNet, DenseNet, EfficientNet) or a sequence of images (LRCN, 3D-CNN) from the same location. We expect this evolving dataset to propel the creation of robust agro-environmental models that can accelerate the comprehension of complex agricultural regions by providing accurate and continuous monitoring of land cover. Comment: 24 pages, 5 figures, dataset descriptor.
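    As a sketch of the sequence benchmark, an LRCN-style model encodes each monthly patch with a shared CNN and aggregates the sequence with a recurrent layer. The 12 bands, five monthly timesteps and 10 classes follow the dataset description above; the layer sizes are illustrative assumptions, not the released benchmark code.

        # LRCN-style crop classifier sketch: shared 2D-CNN per timestep + LSTM.
        import torch
        import torch.nn as nn

        class LRCNClassifier(nn.Module):
            def __init__(self, in_bands=12, n_classes=10, hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(               # shared per-timestep CNN
                    nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.lstm = nn.LSTM(64, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                           # x: (batch, T, bands, H, W)
                b, t = x.shape[:2]
                feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
                _, (h, _) = self.lstm(feats)
                return self.head(h[-1])                     # logits over crop classes

        model = LRCNClassifier()
        sequence = torch.randn(2, 5, 12, 64, 64)  # 2 fields, 5 months (June-October)
        logits = model(sequence)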

    Accurate dense depth from light field technology for object segmentation and 3D computer vision


    Extracting surface water bodies from Sentinel-2 imagery using convolutional neural networks

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Water is an integral part of the ecosystem and plays a significant role in human life. It is an intensively mobilized natural resource and hence should be monitored continuously. Water features extracted from satellite images can be used for urban planning, disaster management, geospatial dataset updates, and similar applications. In this research, surface water features were extracted from Sentinel-2 (S2) images using state-of-the-art deep learning approaches. The performance of three networks proposed in different studies was assessed alongside a baseline model. In addition, two existing but novel Convolutional Neural Network (CNN) architectures, the Densely Connected Convolutional Network (DenseNet) and the Residual Attention Network (AttResNet), were implemented for a comparative study of all the networks. Dense blocks, transition blocks, an attention block and a residual block were then integrated to propose a novel network for water body extraction. Among the existing networks, our experiments suggested that DenseNet was the best, with the highest test accuracy and recall values for water and non-water across all the experimented patch sizes. DenseNet achieved a test accuracy of 89.73% with recall values of 85% and 92% for water and non-water, respectively, at a patch size of 16. Our proposed network then surpassed DenseNet, reaching a test accuracy of 90.29% and recall values of 86% and 93% for water and non-water, respectively. Moreover, our experiments verified that neural networks were better than index-based approaches, since the index-based approaches did not perform well at extracting riverbanks, small water bodies and dried rivers. Qualitative analysis seconded the findings of the quantitative analysis: the proposed network was successful in creating attention-aware features for water pixels and suppressing urban, barren and non-water pixels. All in all, the objectives of the research were met with the successful proposition of a new network.
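    The index-based baseline that the networks are compared against can be as simple as thresholding the Normalized Difference Water Index (NDWI) from the Sentinel-2 green (B03) and near-infrared (B08) bands. A minimal sketch; the zero threshold is a common default, not a value taken from this work:

        # NDWI = (green - nir) / (green + nir); pixels above the threshold -> water.
        import numpy as np

        def ndwi_water_mask(green: np.ndarray, nir: np.ndarray,
                            threshold: float = 0.0) -> np.ndarray:
            ndwi = (green - nir) / (green + nir + 1e-7)  # avoid division by zero
            return ndwi > threshold

        # green, nir = reflectance arrays read from a Sentinel-2 scene (B03, B08)
        # mask = ndwi_water_mask(green, nir)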

    mm-Pose: Real-Time Human Skeletal Posture Estimation using mmWave Radars and CNNs

    In this paper, mm-Pose, a novel approach to detect and track human skeletons in real time using an mmWave radar, is proposed. To the best of the authors' knowledge, this is the first method to detect more than 15 distinct skeletal joints using mmWave radar reflection signals. The proposed method would find applications in traffic monitoring systems, autonomous vehicles, patient monitoring systems and defense, where detecting and tracking the human skeleton supports effective and preventive decision making in real time. The use of radar makes the system operationally robust to scene lighting and adverse weather conditions. The reflected radar point cloud in range, azimuth and elevation is first resolved and projected onto the range-azimuth and range-elevation planes. A novel low-size, high-resolution radar-to-image representation is also presented that overcomes the sparsity of traditional point cloud data and significantly reduces the size of the subsequent machine learning architecture. The RGB channels are assigned the normalized values of range, elevation/azimuth and the power level of the reflection signal for each point. A forked CNN architecture is used to predict the real-world position of the skeletal joints in 3-D space from the radar-to-image representation. The proposed method was tested in a single-human scenario for four primary motions, (i) walking, (ii) swinging the left arm, (iii) swinging the right arm, and (iv) swinging both arms, to validate accurate predictions for motion in range, azimuth and elevation. The detailed methodology, implementation, challenges, and validation results are presented. Comment: Submitted to IEEE Sensors Journal.
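    A simplified version of the radar-to-image representation might look like the sketch below: each reflection point is binned onto a range-azimuth grid, and the three channels carry normalized range, elevation and power. The grid size and per-field normalization are our assumptions; the paper's exact projection may differ.

        # Project a radar point cloud into a 3-channel "image" for a CNN.
        import numpy as np

        def points_to_image(points: np.ndarray, h: int = 64, w: int = 64) -> np.ndarray:
            """points: (N, 4) rows of (range, azimuth, elevation, power)."""
            img = np.zeros((h, w, 3), dtype=np.float32)
            rng, az, el, pwr = points.T
            norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-7)  # scale to [0, 1]
            rows = (norm(rng) * (h - 1)).astype(int)             # range -> row
            cols = (norm(az) * (w - 1)).astype(int)              # azimuth -> column
            img[rows, cols] = np.stack([norm(rng), norm(el), norm(pwr)], axis=1)
            return img

        image = points_to_image(np.random.rand(40, 4))  # 40 reflection points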
