
    SC-Fuse: A Feature Fusion Approach for Unpaved Road Detection from Remotely Sensed Images

    Road network extraction from remote sensing imagery is crucial for numerous applications, ranging from autonomous navigation to urban and rural planning. A particularly challenging aspect is the detection of unpaved roads, which are often underrepresented in research and data. These roads vary in texture, width, shape, and surroundings, making their detection complex. This thesis addresses these challenges by creating a specialized dataset and introducing the SC-Fuse model. Our custom dataset comprises high-resolution remote sensing imagery that primarily targets unpaved roads of the American Midwest. To capture seasonal variation and its impact, the dataset includes images from different times of the year under various weather conditions, offering a comprehensive view of these changing conditions. To detect roads in this dataset, we developed SC-Fuse, a novel deep learning architecture designed to extract unpaved road networks from satellite imagery. The model leverages the strengths of dual feature extractors, a Swin Transformer and a residual CNN, so that SC-Fuse captures both the local and the global context of the images. Their features are combined by a Feature Fusion Module that uses a linear attention mechanism to improve computational efficiency, and a LinkNet-based decoder ensures precise road network reconstruction. SC-Fuse is evaluated using various metrics, including qualitative visual assessments, to test its effectiveness in unpaved road detection. Advisors: Ashok Samal and Cody Stoll
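
    As a rough illustration of the dual-branch design described above, the following PyTorch sketch pairs a small residual CNN with a plain transformer encoder (standing in for the Swin branch), fuses the two feature maps with a linearized attention module, and decodes back to full resolution with transposed convolutions in the spirit of LinkNet. All module names, channel sizes, depths, and the specific attention formulation are assumptions for illustration; this is not the thesis code.

```python
# Minimal sketch of an SC-Fuse-style dual-branch segmentation model (PyTorch).
# Channel sizes, depths, and the plain ViT encoder standing in for the Swin
# branch are illustrative assumptions, not the thesis implementation.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))


class LinearAttentionFusion(nn.Module):
    """Fuse CNN (query) and transformer (key/value) features with linearized
    attention: softmax over channels for Q and over positions for K, so the
    cost grows linearly with the number of spatial positions, not quadratically."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, cnn_feat, trans_feat):
        b, c, h, w = cnn_feat.shape
        q = self.q(cnn_feat).flatten(2).transpose(1, 2).softmax(dim=-1)   # (B, N, C)
        k = self.k(trans_feat).flatten(2).transpose(1, 2).softmax(dim=1)  # (B, N, C)
        v = self.v(trans_feat).flatten(2).transpose(1, 2)                 # (B, N, C)
        context = k.transpose(1, 2) @ v                    # (B, C, C) global summary
        out = (q @ context).transpose(1, 2).reshape(b, c, h, w)
        return self.proj(out) + cnn_feat                   # residual fusion


class SCFuseSketch(nn.Module):
    """Residual-CNN branch (local detail) + transformer branch (global context),
    fused and decoded to a full-resolution road mask, LinkNet-style."""

    def __init__(self, dim=64, num_classes=1, patch=8):
        super().__init__()
        # Local branch: downsamples the input to 1/8 resolution.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            ResidualBlock(dim), nn.MaxPool2d(2),
            ResidualBlock(dim), nn.MaxPool2d(2))
        # Global branch: patch embedding + vanilla transformer encoder.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = LinearAttentionFusion(dim)
        # Decoder: three stride-2 transposed convolutions back to input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim, dim // 2, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim // 2, num_classes, 2, stride=2))

    def forward(self, x):
        b, _, h, w = x.shape
        cnn_feat = self.cnn(x)                                   # (B, dim, H/8, W/8)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        trans_feat = self.transformer(tokens).transpose(1, 2).reshape(b, -1, h // 8, w // 8)
        return self.decoder(self.fuse(cnn_feat, trans_feat))     # (B, 1, H, W) road logits


model = SCFuseSketch()
logits = model(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 1, 256, 256])
```

    A forward pass on a 256×256 tile returns a full-resolution logit map for the road class; the point of the linear attention step is that the global context summary is a small C×C matrix, which keeps the fusion cheap at aerial-image resolutions.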

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data, and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in larger parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. We report results indicating that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data. Comment: Published in IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
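
    The appeal of this approach is that OpenStreetMap geometries can be turned into (noisy) pixel labels essentially for free. The sketch below shows one way to do that, assuming the aerial tile is georeferenced in a projected (metric) CRS and that road centerlines are buffered to a nominal width; the function name, file paths, and buffer width are hypothetical and not taken from the paper.

```python
# Hedged sketch: turning OpenStreetMap road centerlines into pixel-level
# training masks for one georeferenced aerial tile. Paths, the 4 m nominal
# road width, and the assumption of a projected (metric) image CRS are all
# illustrative; the paper's actual label-generation pipeline may differ.
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize


def osm_road_mask(osm_path: str, image_path: str, road_width_m: float = 4.0) -> np.ndarray:
    """Rasterize buffered OSM road geometries onto the image grid as a 0/1 mask."""
    with rasterio.open(image_path) as src:
        transform, out_shape, crs = src.transform, (src.height, src.width), src.crs
    roads = gpd.read_file(osm_path).to_crs(crs)            # reproject vectors to the image CRS
    buffered = roads.geometry.buffer(road_width_m / 2.0)   # centerlines -> polygons of nominal width
    shapes = ((geom, 1) for geom in buffered if not geom.is_empty)
    return rasterize(shapes, out_shape=out_shape, transform=transform,
                     fill=0, dtype="uint8")                # noisy but essentially free labels


# Example (hypothetical file names):
# mask = osm_road_mask("osm_roads.geojson", "aerial_tile_0421.tif")
```

    The resulting masks inherit OSM's registration and completeness errors, which is exactly the kind of label noise the paper argues CNNs tolerate when the training set is large enough.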