
    A robust nonlinear scale space change detection approach for SAR images

    In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images, for robust detection of various changes incurred by natural phenomena and/or human activities in Synthetic Aperture Radar (SAR) images, using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant of the log-ratio image of the multitemporal images is calculated, followed by Feature Preserving Despeckling (FPD) to generate nonlinear scale space images exhibiting different trade-offs between speckle reduction and shape detail preservation. The MSERs of each scale space image are found and then combined through a decision-level fusion strategy, namely "selective scale fusion" (SSF), in which the contrast and boundary curvature of each MSER are considered. The performance of the proposed method is evaluated using real multitemporal high-resolution TerraSAR-X images and synthetically generated multitemporal images composed of shapes with several orientations, sizes, and backscatter amplitude levels, representing a variety of possible signatures of change. One of the main outcomes of this approach is that objects of different sizes and levels of contrast with their surroundings appear as stable regions in different scale space images; thus, fusing the results across scale space images yields good overall performance.
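    The pipeline above starts from a log-ratio change image and builds a small scale space of progressively smoothed versions. A minimal numpy sketch, assuming a simple box filter as a stand-in for the paper's FPD step (the filter names and toy images here are illustrative, not the authors' implementation):

```python
import numpy as np

def log_ratio(im1, im2, eps=1e-6):
    """Log-ratio change image of two co-registered SAR amplitude images."""
    return np.abs(np.log((im2 + eps) / (im1 + eps)))

def box_smooth(img, k):
    """k x k box filter via integral images (a crude stand-in for FPD)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col so windows index cleanly
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def scale_space(change_img, kernels=(1, 3, 5)):
    """Smoothed variants of the change image: more speckle reduction, less detail."""
    return [change_img if k == 1 else box_smooth(change_img, k) for k in kernels]

# toy example: a bright "new object" appears in the second acquisition
rng = np.random.default_rng(0)
im1 = rng.gamma(4.0, 1.0, (32, 32))          # speckled background
im2 = im1.copy()
im2[10:20, 10:20] *= 5.0                     # change region
scales = scale_space(log_ratio(im1, im2))
```

    MSER extraction on each scale image (e.g. with OpenCV's `MSER_create`) and the SSF decision-level fusion would follow; they are omitted here for brevity.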

    Semantic Segmentation of Remote-Sensing Images Through Fully Convolutional Neural Networks and Hierarchical Probabilistic Graphical Models

    Deep learning (DL) is currently the dominant approach to image classification and segmentation, but the performance of DL methods is strongly influenced by the quantity and quality of the ground truth (GT) used for training. In this article, a DL method is presented to deal with the semantic segmentation of very-high-resolution (VHR) remote-sensing data in the case of scarce GT. The main idea is to combine a specific type of deep convolutional neural network (CNN), namely fully convolutional networks (FCNs), with probabilistic graphical models (PGMs). Our method takes advantage of the intrinsic multiscale behavior of FCNs to deal with multiscale data representations and to connect them to a hierarchical Markov model (e.g., making use of a quadtree). As a consequence, the spatial information present in the data is better exploited, allowing a reduced sensitivity to GT incompleteness to be obtained. The marginal posterior mode (MPM) criterion is used for inference in the proposed framework. To assess the capabilities of the proposed method, the experimental validation is conducted with the ISPRS 2D Semantic Labeling Challenge datasets on the cities of Vaihingen and Potsdam, with some modifications to simulate the spatially sparse GTs that are common in real remote-sensing applications. The results are significant: the proposed approach exhibits a higher producer's accuracy than the standard FCNs considered and, in particular, mitigates the impact of scarce GTs on minority classes and small spatial details.
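    The core idea — linking FCN outputs at different scales through a quadtree — can be illustrated with a tiny two-scale sketch, in which each fine-scale pixel combines its own FCN class scores with its quadtree parent's posterior through a class-transition matrix. This is a schematic of the hierarchical-Markov coupling, not the paper's exact MPM recursion; all arrays and the transition matrix are made up for illustration:

```python
import numpy as np

def fuse_scales(p_parent, p_child_site, transition):
    """Top-down fusion on a quadtree link: each child pixel mixes its own
    class scores with its parent's posterior via a transition matrix."""
    # each parent cell covers a 2x2 block of children -> nearest upsampling
    up = p_parent.repeat(2, axis=0).repeat(2, axis=1)       # (2H, 2W, C)
    fused = p_child_site * (up @ transition)                # site evidence * parent prior
    return fused / fused.sum(axis=-1, keepdims=True)        # renormalise per pixel

C = 3
rng = np.random.default_rng(1)
p_parent = rng.dirichlet(np.ones(C), size=(4, 4))           # coarse-scale posteriors
p_site = rng.dirichlet(np.ones(C), size=(8, 8))             # fine-scale FCN scores
T = np.full((C, C), 0.1) + 0.7 * np.eye(C)                  # "sticky" class transitions
posterior = fuse_scales(p_parent, p_site, T)
labels = posterior.argmax(axis=-1)                          # MPM-style per-pixel decision
```

    The diagonal-heavy transition matrix encodes the assumption that a child pixel usually keeps its parent's class, which is what lets sparse fine-scale GT borrow strength from the coarser scale.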

    HST-MRF: Heterogeneous Swin Transformer with Multi-Receptive Field for Medical Image Segmentation

    The Transformer has been successfully used in medical image segmentation due to its excellent long-range modeling capabilities. However, patch segmentation is necessary when building a Transformer-based model. This process may disrupt the tissue structure in medical images, resulting in the loss of relevant information. In this study, we propose a Heterogeneous Swin Transformer with Multi-Receptive Field (HST-MRF) model based on U-shaped networks for medical image segmentation. The main purpose is to solve the problem of structural information loss caused by Transformer patch segmentation, by fusing patch information under different receptive fields. The heterogeneous Swin Transformer (HST) is the core module, which achieves the interaction of multi-receptive-field patch information through heterogeneous attention and passes it to the next stage for progressive learning. We also designed a two-stage fusion module, multimodal bilinear pooling (MBP), to assist HST in further fusing multi-receptive-field information and combining low-level and high-level semantic information for accurate localization of lesion regions. In addition, we developed adaptive patch embedding (APE) and soft channel attention (SCA) modules to retain more valuable information when acquiring patch embeddings and filtering channel features, respectively, thereby improving segmentation quality. We evaluated HST-MRF on multiple datasets for polyp and skin lesion segmentation tasks. Experimental results show that the proposed method outperforms state-of-the-art models. Furthermore, we verified the effectiveness of each module, and the benefit of multi-receptive-field segmentation in reducing the loss of structural information, through ablation experiments.
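    Bilinear pooling, the operation underlying the MBP fusion module, combines two feature vectors through their outer product so that all pairwise feature interactions are captured. A generic sketch with the usual signed-square-root and L2 normalization (the paper's MBP module is more elaborate; the example vectors are invented):

```python
import numpy as np

def bilinear_pool(x, y):
    """Fuse two feature vectors by their outer product, then apply
    signed square-root and L2 normalisation."""
    z = np.outer(x, y).ravel()              # all pairwise feature interactions
    z = np.sign(z) * np.sqrt(np.abs(z))     # signed sqrt damps large activations
    n = np.linalg.norm(z)
    return z / n if n > 0 else z

# e.g. features of one spatial location from two receptive-field branches
a = np.array([0.2, -1.0, 0.5, 0.3])
b = np.array([1.5, 0.1, -0.4])
fused = bilinear_pool(a, b)                 # length 4 * 3 = 12
```

    The quadratic output size (len(x) * len(y)) is why bilinear-pooling modules in practice add a projection or compact approximation afterwards.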

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications


    A Causal Hierarchical Markov Framework for the Classification of Multiresolution and Multisensor Remote Sensing Images

    In this paper, a multiscale Markov framework is proposed to address the classification of multiresolution and multisensor remotely sensed data. The proposed framework makes use of a quadtree to model the interactions across different spatial resolutions, and of a Markov model with respect to a generic total order relation to deal with contextual information at each scale, in order to favor applicability to very-high-resolution imagery. The methodological properties of the proposed hierarchical framework are investigated. First, we prove the causality of the overall model, a particularly advantageous property in terms of the computational cost of inference. Second, we derive the expression of the marginal posterior mode (MPM) criterion for inference within the proposed framework. Within this framework, a specific algorithm is formulated by defining, within each layer of the quadtree, a Markov chain model with respect to a pixel scan that combines a zig-zag trajectory with a Hilbert space-filling curve. Data collected by distinct sensors at the same spatial resolution are fused through gradient boosted regression trees. The developed algorithm was experimentally validated with two very-high-resolution datasets including multispectral, panchromatic, and radar satellite images. The experimental results confirm the effectiveness of the proposed algorithm compared to previous techniques based on alternative approaches to multiresolution fusion.
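    The per-layer pixel scan relies on a space-filling curve so that a 2-D image layer can be traversed as a 1-D causal sequence while keeping consecutive pixels spatially adjacent. A standard distance-to-coordinate Hilbert-curve routine (the classic iterative algorithm; the 8×8 grid size is just an example) sketches one way to build such a visit order:

```python
def hilbert_d2xy(n, d):
    """Convert distance d along an n x n Hilbert curve (n a power of two)
    to (x, y) grid coordinates. Consecutive distances map to neighbouring
    pixels, which is what makes the scan usable as a causal Markov chain."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

scan = [hilbert_d2xy(8, d) for d in range(64)]   # visit order for an 8x8 layer
```

    Each quadtree layer would get its own scan at the corresponding resolution; the paper additionally interleaves a zig-zag trajectory, omitted here.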

    Object Contour and Edge Detection with RefineContourNet

    A ResNet-based multi-path refinement CNN is used for object contour detection. For this task, we prioritize the effective utilization of the high-level abstraction capability of a ResNet, which leads to state-of-the-art results for edge detection. With this focus in mind, we fuse the high-, mid-, and low-level features in that specific order, which differs from many other approaches. The network uses the tensor with the highest-level features as the starting point and combines it layer by layer with features of lower abstraction levels until it reaches the lowest level. We train this network on a modified PASCAL VOC 2012 dataset for object contour detection and evaluate it on a refined PASCAL-val dataset, reaching excellent performance and an Optimal Dataset Scale (ODS) of 0.752. Furthermore, by fine-tuning on the BSDS500 dataset, we reach state-of-the-art results for edge detection with an ODS of 0.824.
    Keywords: Object Contour Detection, Edge Detection, Multi-Path Refinement CNN
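    The high-to-low fusion order can be sketched schematically: start from the most abstract (coarsest) feature map, then repeatedly upsample the running result and merge the next finer map. This toy uses nearest-neighbour upsampling and plain addition in place of the network's learned refinement blocks; the arrays are invented:

```python
import numpy as np

def refine(high, mid, low):
    """Merge features from highest abstraction down to lowest, mirroring
    the high -> mid -> low fusion order described above."""
    def up2(f):                              # nearest-neighbour 2x upsampling
        return f.repeat(2, axis=0).repeat(2, axis=1)
    out = up2(high) + mid                    # start from the most abstract map
    out = up2(out) + low                     # finish at the finest resolution
    return out

high = np.ones((4, 4))                       # coarse, high-level features
mid = np.full((8, 8), 0.5)
low = np.zeros((16, 16))
fused = refine(high, mid, low)               # (16, 16), value 1.5 everywhere
```

    Starting from the high-level map means semantic context is fixed first and finer maps only sharpen localization, which is the design choice the abstract highlights.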