
    Automatic and semi-automatic extraction of curvilinear features from SAR images

    Extraction of curvilinear features from synthetic aperture radar (SAR) images is important for automatic recognition of various targets, such as fences surrounding buildings. The bright pixels that constitute curvilinear features in SAR images are usually disrupted and degraded by a high amount of speckle noise, which makes their extraction very difficult. In this paper, an approach for the extraction of curvilinear features from SAR images is presented. The proposed approach searches for a curvilinear feature as an optimum unidirectional path crossing over the vertices of the feature, determined after a despeckling operation. The method can be used in a semi-automatic mode, if the user supplies the starting vertex, or in an automatic mode otherwise. In the semi-automatic mode, the proposed method produces reasonably accurate real-time solutions for SAR images.
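    The sketch below illustrates one plausible reading of an "optimum unidirectional path" search: a column-by-column dynamic program over a despeckled intensity image in which the path may shift by at most one row per column. The cost definition, the one-row move constraint and the function name are assumptions made for illustration, not the authors' exact formulation.

```python
# Minimal sketch: maximum-brightness unidirectional path across a despeckled image.
import numpy as np

def unidirectional_path(intensity, start_row=None):
    """Return one row index per column tracing a maximum-brightness path."""
    rows, cols = intensity.shape
    score = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    if start_row is None:                      # automatic mode: any starting row allowed
        score[:, 0] = intensity[:, 0]
    else:                                      # semi-automatic mode: user-supplied starting vertex
        score[start_row, 0] = intensity[start_row, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)   # predecessor may be -1, 0 or +1 rows away
            prev = int(np.argmax(score[lo:hi, c - 1])) + lo
            score[r, c] = score[prev, c - 1] + intensity[r, c]
            back[r, c] = prev
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmax(score[:, -1]))
    for c in range(cols - 1, 0, -1):           # backtrack from the best final vertex
        path[c - 1] = back[path[c], c]
    return path
```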

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.

    Comparison of land-cover classification methods in the Brazilian Amazon Basin.

    Numerous classifiers have been developed, each with its own characteristics, and conflicting results are often reported depending on the landscape complexity of the study area and the data used. This paper therefore aims to find a suitable classifier for tropical land cover classification. Five classifiers – the minimum distance classifier (MDC), maximum likelihood classifier (MLC), Fisher linear discriminant (FLD), extraction and classification of homogeneous objects (ECHO), and linear spectral mixture analysis (LSMA) – were tested using Landsat Thematic Mapper (TM) data in the Amazon basin with the same training sample data sets. Seven land cover classes – mature forest, advanced succession forest, initial secondary succession forest, pasture, agricultural lands, bare lands, and water – were classified. Overall classification accuracy and kappa analysis were calculated. The results indicate that the LSMA and ECHO classifiers provided better classification accuracies than MDC, MLC, and FLD in the moist tropical region. The overall accuracy of the LSMA approach reached 86%, with a kappa coefficient of 0.82.
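    For readers unfamiliar with the two accuracy measures reported above, the short sketch below computes overall accuracy and Cohen's kappa from an error (confusion) matrix; the matrix values are invented purely for illustration and are not from the paper.

```python
# Minimal sketch: overall accuracy and kappa coefficient from a confusion matrix.
import numpy as np

def overall_accuracy_and_kappa(cm):
    cm = cm.astype(float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)

cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 40]])          # rows: reference classes, columns: classified classes
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {kappa:.2f}")
```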

    Edge Enhancement from Low-Light Image by Convolutional Neural Network and Sigmoid Function

    Due to camera resolution or lighting conditions, captured images are often over-exposed or under-exposed, so enhancement techniques are needed to correct these artifacts in recorded images. The objective of image enhancement and adjustment techniques is to improve the quality and characteristics of an image. In general, enhancement distorts the original numerical values of an image, so the technique must be designed not to compromise image quality. Enhancement extracts the characteristics of the image rather than restoring a degraded one; it involves processing the degraded image to improve its visual appearance. A large body of research addresses image enhancement, including deep learning. Most existing contrast enhancement methods adjust the tone curve to correct the contrast of an input image, but they do not work well because of the limited amount of information contained in a single image. In this research, a CNN with edge adjustment is proposed; with this technique, input low-contrast images can be adapted to produce high-quality enhancement. The result analysis shows that the developed technique offers significant advantages over existing methods.
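    The PyTorch sketch below shows the general shape of such an approach: a small convolutional network whose output is passed through a sigmoid so the enhanced image stays in [0, 1]. The layer sizes, the residual adjustment term and the class name are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch: CNN enhancement with a sigmoid output range of [0, 1].
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict an adjustment, add it to the input, then squash to [0, 1].
        return torch.sigmoid(x + self.body(x))

low_light = torch.rand(1, 3, 64, 64)   # dummy low-light image batch
enhanced = EnhanceNet()(low_light)
print(enhanced.shape)                  # torch.Size([1, 3, 64, 64])
```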

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Automatic learning of structural knowledge from geographic information for updating land cover maps

    The number of satellites and remote sensing sensors devoted to Earth observation is becoming increasingly high, providing more and more data, and especially images. At the same time, access to such data and to the tools to process them has improved considerably. Given this data flow, and the need to monitor and predict environmental and societal changes in highly dynamic socio-environmental contexts, automatic image interpretation methods are needed. This can be accomplished by exploring some strengths of artificial intelligence. Our main idea is to induce classification rules that explicitly take structural knowledge into account, using Aleph, an Inductive Logic Programming (ILP) system. We applied the proposed methodology to three land cover/use maps of the French Guiana littoral. One hundred and forty-six classification rules were induced for the 39 land-cover classes of the maps. These rules are expressed in first-order logic, which makes them intelligible and interpretable by non-experts. A ten-fold cross-validation gave average values for classification accuracy, specificity and sensitivity of 98.82%, 99.65% and 70%, respectively. The proposed methodology could be usefully exploited to automatically classify new objects and/or to help operators using object-based classification procedures.
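    The sketch below shows only the ten-fold evaluation step mentioned above: averaging accuracy, specificity and sensitivity over stratified folds using per-class confusion-matrix counts. The classifier is a stand-in decision tree on synthetic tabular features, not the Aleph/ILP rule inducer used in the paper.

```python
# Minimal sketch: ten-fold cross-validation of accuracy, sensitivity and specificity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
acc, sens, spec = [], [], []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = DecisionTreeClassifier(random_state=0).fit(X[train], y[train])
    cm = confusion_matrix(y[test], clf.predict(X[test]))
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    acc.append(tp.sum() / cm.sum())
    sens.append(np.mean(tp / np.maximum(tp + fn, 1)))   # per-class recall, averaged
    spec.append(np.mean(tn / np.maximum(tn + fp, 1)))   # per-class specificity, averaged
print(f"accuracy={np.mean(acc):.2%} specificity={np.mean(spec):.2%} sensitivity={np.mean(sens):.2%}")
```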

    Determining class proportions within a pixel using a new mixed-label analysis method

    Land-cover classification is perhaps one of the most important applications of remote-sensing data. Conventional (hard) classification methods have limitations because mixed pixels are often abundant in remote-sensing images and cannot be appropriately or accurately classified by these methods. This paper presents a new approach to improving the classification performance of remote-sensing applications based on mixed-label analysis (MLA). The MLA model determines class proportions within a pixel, producing a soft classification from remote-sensing data. Simulated images and real data sets are used to illustrate the simplicity and effectiveness of the proposed approach. Classification accuracy achieved by MLA is compared with other conventional methods such as linear spectral mixture models, maximum likelihood, minimum distance, and artificial neural networks. Experiments have demonstrated that this new method can generate more accurate land-cover maps, even in the presence of uncertainties in the form of mixed pixels.
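    As background for the comparison above, the sketch below shows the linear spectral mixture model baseline (not the paper's MLA method): per-pixel class fractions are estimated by non-negative least squares against pure-class (endmember) spectra, then renormalised to sum to one. The endmember values are invented for illustration.

```python
# Minimal sketch: linear spectral unmixing of a single mixed pixel.
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([[0.10, 0.45, 0.30],   # rows: spectral bands
                       [0.15, 0.40, 0.60],   # columns: pure-class (endmember) spectra
                       [0.05, 0.55, 0.35],
                       [0.20, 0.35, 0.50]])
pixel = 0.6 * endmembers[:, 0] + 0.3 * endmembers[:, 1] + 0.1 * endmembers[:, 2]

fractions, _ = nnls(endmembers, pixel)       # non-negativity constraint on abundances
fractions /= fractions.sum()                 # sum-to-one constraint
print(fractions)                             # approximately [0.6, 0.3, 0.1]
```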

    OBIA for combining LiDAR and multispectral data to characterize forested areas and land cover in a tropical region

    Prioritizing and designing forest restoration strategies requires an adequate survey informing on the status (degraded or not) of forest types and on human disturbances over a territory. Very High Spatial Resolution (VHSR) remotely sensed data offer valuable information for performing such a survey. In this study we present an OBIA methodology for mapping forest types at risk and land cover in a tropical context (Mayotte Island), combining LiDAR data (1 m pixel), VHSR multispectral images (Spot 5 XS 10 m pixel and orthophotos 0.5 m pixel) and ancillary data (existing thematic information). A Digital Canopy Model (DCM) was derived from the LiDAR data, and additional layers were built from the DCM to better account for the horizontal variability of canopy height: max and high-pass filters (3 m x 3 m kernel size) and a Haralick variance texture image (51 m x 51 m kernel size). OBIA emerges as a suitable framework for exploiting multisource information during segmentation as well as during the classification process. A precise map (84% overall accuracy) was obtained, informing on (i) the surface area of forest types (defined according to their structure, i.e. the canopy height of forest patches for a specific type); (ii) degradation (identified from the heterogeneity of canopy height and the presence of eroded areas); and (iii) human disturbances. Improvements can be made in discriminating forest types according to their composition (deciduous, evergreen or mixed), in particular by exploiting a more radiometrically homogeneous VHSR multispectral image.
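    A minimal sketch of the DCM-derived layers described above is given below, assuming a 1 m pixel size so the stated kernel sizes map to 3x3 and 51x51 pixel windows. Plain local variance is used here as a simplified stand-in for the GLCM-based Haralick variance, and the dummy DCM array is invented for illustration.

```python
# Minimal sketch: canopy-height layers derived from a Digital Canopy Model (DCM).
import numpy as np
from scipy import ndimage

dcm = np.random.rand(200, 200) * 30.0                  # dummy DCM, canopy height in metres

max_3x3 = ndimage.maximum_filter(dcm, size=3)          # 3x3 local maximum canopy height
high_pass = dcm - ndimage.uniform_filter(dcm, size=3)  # 3x3 high-pass (detail) component
local_mean = ndimage.uniform_filter(dcm, size=51)
variance_51 = ndimage.uniform_filter(dcm ** 2, size=51) - local_mean ** 2  # 51x51 local variance texture
```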