7,993 research outputs found

    High-resolution optical and SAR image fusion for building database updating

    Get PDF
    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar (SAR) and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. To perform these two steps, relevant features are extracted from the images in the neighborhood of the considered object. The removal or inclusion of an object in the DB is based on a score obtained by fusing the features in the framework of Dempster–Shafer evidence theory.
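
    The abstract does not spell out the fusion step, so the following is a minimal, hypothetical sketch of Dempster's rule of combination over a two-hypothesis frame {building, not building}; the feature-to-mass mappings and the example mass values are illustrative assumptions, not the paper's actual feature set.

```python
# Minimal sketch of Dempster's rule of combination on the frame
# {building (B), not building (N)}; the input masses are hypothetical.

def combine(m1, m2):
    """Combine two mass functions given as dicts over the focal sets
    {'B'}, {'N'} and the full frame {'B', 'N'} (ignorance)."""
    frame = frozenset({'B', 'N'})
    combined = {frozenset({'B'}): 0.0, frozenset({'N'}): 0.0, frame: 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] += ma * mb
            else:
                conflict += ma * mb
    # Normalize by the non-conflicting mass (Dempster's rule).
    if conflict < 1.0:
        for k in combined:
            combined[k] /= (1.0 - conflict)
    return combined

# Hypothetical masses derived from two image features of one DB object,
# e.g. an optical edge-support score and a SAR backscatter score.
m_optical = {frozenset({'B'}): 0.6, frozenset({'N'}): 0.1, frozenset({'B', 'N'}): 0.3}
m_sar     = {frozenset({'B'}): 0.5, frozenset({'N'}): 0.2, frozenset({'B', 'N'}): 0.3}

fused = combine(m_optical, m_sar)
score = fused[frozenset({'B'})]          # belief committed exactly to "building"
print(f"building score: {score:.3f}")    # keep the object if the score is high enough
```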

    Automatic and semi-automatic extraction of curvilinear features from SAR images

    Get PDF
    Extraction of curvilinear features from synthetic aperture radar (SAR) images is important for the automatic recognition of various targets, such as fences surrounding buildings. The bright pixels that constitute curvilinear features in SAR images are usually disrupted and degraded by a high amount of speckle noise, which makes their extraction very difficult. This paper presents an approach for extracting curvilinear features from SAR images. The proposed approach searches for a curvilinear feature as an optimum unidirectional path crossing over the vertices of the feature, which are determined after a despeckling operation. The proposed method can be used in a semi-automatic mode if the user supplies the starting vertex, or in an automatic mode otherwise. In the semi-automatic mode, the proposed method produces reasonably accurate real-time solutions for SAR images.
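
    As a rough, simplified stand-in for the optimum-path idea (not the paper's actual formulation), one can treat the despeckled intensity image as a cost map that is cheap on bright pixels and run Dijkstra between a user-supplied start vertex and an end vertex; the cost definition, 8-connectivity and toy data below are assumptions.

```python
# Dijkstra over the pixel grid: bright (feature) pixels get low cost,
# so the minimum-cost path tends to follow the curvilinear feature.
import heapq
import numpy as np

def brightest_path(img, start, end):
    """Return a list of (row, col) pixels forming a minimum-cost path."""
    cost = img.max() - img + 1e-3          # bright pixels become cheap to enter
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the end vertex to recover the path.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy "despeckled" image with a bright diagonal ridge (hypothetical data).
img = np.zeros((50, 50))
for i in range(50):
    img[i, i] = 1.0
print(brightest_path(img, (0, 0), (49, 49))[:5])
```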

    A Self-Organizing System for Classifying Complex Images: Natural Textures and Synthetic Aperture Radar

    Full text link
    A self-organizing architecture is developed for image region classification. The system consists of a preprocessor that utilizes multi-scale filtering, competition, cooperation, and diffusion to compute a vector of image boundary and surface properties, notably texture and brightness properties. This vector is input to a system that incrementally learns noisy multidimensional mappings and their probabilities. The architecture is applied to difficult real-world image classification problems, including classification of synthetic aperture radar and natural texture images, and outperforms a recent state-of-the-art system at classifying natural textures. Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100); Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334); National Science Foundation (IRI-90-00530, IRI-90-24877).
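
    The per-pixel feature vector described above can be sketched in a few lines; the sketch below stacks Gaussian-smoothed brightness and gradient-magnitude (boundary) responses at several scales. The choice of Gaussian filters and the scale values are assumptions for illustration, not the paper's operators; NumPy and SciPy are assumed.

```python
# Multi-scale preprocessing sketch: one descriptor per pixel, combining
# surface (smoothed brightness) and boundary (gradient) responses.
import numpy as np
from scipy import ndimage

def multiscale_features(img, sigmas=(1, 2, 4)):
    """Return an (H, W, 2 * len(sigmas)) feature array."""
    channels = []
    for s in sigmas:
        channels.append(ndimage.gaussian_filter(img, sigma=s))             # surface/brightness
        channels.append(ndimage.gaussian_gradient_magnitude(img, sigma=s)) # boundary/texture
    return np.stack(channels, axis=-1)

# Hypothetical single-channel image; each pixel gets a 6-D descriptor that
# a downstream incremental classifier could learn to label.
img = np.random.rand(64, 64)
feats = multiscale_features(img)
print(feats.shape)  # (64, 64, 6)
```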

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.
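
    The survey itself contains no code; as a rough illustration of challenge (vi), transfer learning, the sketch below fine-tunes only the classification head of an ImageNet-pretrained ResNet-18 for a hypothetical remote-sensing scene-classification task. The dataset, class count and backbone choice are placeholders, and torchvision >= 0.13 is assumed.

```python
# Transfer-learning sketch: freeze a pretrained backbone, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

num_rs_classes = 10                       # hypothetical number of RS scene classes

model = models.resnet18(weights="DEFAULT")
for p in model.parameters():              # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_rs_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 224x224 RGB patches.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_rs_classes, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```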

    Deep learning in remote sensing: a review

    Get PDF
    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in the IEEE Geoscience and Remote Sensing Magazine.

    A Self-Organizing Neural System for Learning to Recognize Textured Scenes

    Full text link
    A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees, and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, the class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers. Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657).
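
    The category search and vigilance test described above can be illustrated with a toy, fuzzy-ART-style learner; the similarity measure, vigilance value and update rule below are illustrative stand-ins, not ARTEX's actual equations.

```python
# Toy ART-style matching: if the best prototype fails the vigilance test,
# open a new category instead of forcing a bad fit; otherwise resonate.
import numpy as np

class SimpleART:
    def __init__(self, vigilance=0.8, lr=0.5):
        self.vigilance = vigilance
        self.lr = lr
        self.prototypes = []             # one vector per learned category

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        best, best_match = None, -1.0
        for j, w in enumerate(self.prototypes):
            # Match = fraction of the input "covered" by the prototype
            # (a fuzzy-ART-style min/sum measure).
            match = np.minimum(x, w).sum() / (x.sum() + 1e-9)
            if match > best_match:
                best, best_match = j, match
        if best is None or best_match < self.vigilance:
            self.prototypes.append(x.copy())   # memory search fails: new category
            return len(self.prototypes) - 1
        # Resonance: move the winning prototype toward the input.
        w = self.prototypes[best]
        self.prototypes[best] = (1 - self.lr) * w + self.lr * np.minimum(x, w)
        return best

art = SimpleART()
print(art.learn([0.9, 0.1, 0.0]))    # -> 0 (first category created)
print(art.learn([0.85, 0.15, 0.0]))  # -> 0 (good match, resonates)
print(art.learn([0.0, 0.1, 0.9]))    # poor match -> new category 1
```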

    NEFI: Network Extraction From Images

    Full text link
    Networks and network-like structures are amongst the central building blocks of many technological and biological systems. Given a mathematical graph representation of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to graphs describing large-scale networks such as social networks, protein-interaction networks, etc. In these applications, graph acquisition, i.e., the extraction of a mathematical graph from a network, is relatively simple. However, for many network-like structures, e.g. leaf venations, slime molds and mud cracks, data collection relies on images, where graph extraction requires domain-specific solutions or even manual processing. Here we introduce Network Extraction From Images, NEFI, a software tool that automatically extracts accurate graphs from images of a wide range of networks originating in various domains. While there is previous work on graph extraction from images, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners from many disciplines to easily extract graph representations from images by supplying flexible tools from image processing, computer vision and graph theory bundled in a convenient package. Thus, NEFI constitutes a scalable alternative to tedious and error-prone manual graph extraction and to special-purpose tools. We anticipate NEFI to enable the collection of larger datasets by reducing the time spent on graph extraction. The analysis of these new datasets may open up the possibility of gaining new insights into the structure and function of various types of networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de
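
    The following is not NEFI's actual pipeline, just an outline of the general idea such tools automate: skeletonize a binary network image and connect 8-adjacent skeleton pixels into a graph. The library choices (scikit-image, SciPy, NetworkX) and the toy cross-shaped image are assumptions for illustration.

```python
# Pixel-graph extraction sketch: skeleton pixels become vertices, adjacency
# along the skeleton becomes edges.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def extract_pixel_graph(binary_img):
    """Skeletonize and return a graph whose vertices are skeleton pixels and
    whose edges join 8-adjacent skeleton pixels."""
    skel = skeletonize(binary_img > 0)
    pixels = [tuple(p) for p in np.argwhere(skel)]
    pixel_set = set(pixels)
    g = nx.Graph()
    g.add_nodes_from(pixels)
    for (r, c) in pixels:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr or dc) and (r + dr, c + dc) in pixel_set:
                    g.add_edge((r, c), (r + dr, c + dc))
    return g, skel

# Toy binary image: two one-pixel-wide strokes crossing in the middle.
img = np.zeros((40, 40), dtype=bool)
img[20, 5:35] = True
img[5:35, 20] = True
g, _ = extract_pixel_graph(img)
print(g.number_of_nodes(), g.number_of_edges())
```

    A full extractor would additionally contract chains of degree-2 pixels into single weighted edges so that only junctions and endpoints remain as graph nodes.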