
    Segmentation of Oil Spills on Side-Looking Airborne Radar imagery with Autoencoders

    In this work, we use deep neural autoencoders to segment oil spills from Side-Looking Airborne Radar (SLAR) imagery. Synthetic Aperture Radar (SAR) has been widely exploited for ocean surface monitoring, especially for oil pollution detection, but few approaches in the literature use SLAR. Our sensor consists of two SAR antennas mounted on an aircraft, enabling a quicker response than satellite sensors for emergency services when an oil spill occurs. Experiments on TERMA radar were carried out to detect oil spills on Spanish coasts using deep selectional autoencoders and RED-nets (very deep Residual Encoder-Decoder Networks). Different configurations of these networks were evaluated, and the best topology significantly outperformed previous approaches, correctly detecting 100% of the spills and obtaining an F1 score of 93.01% at the pixel level. The proposed autoencoders perform accurately on SLAR imagery that contains artifacts and noise caused by aircraft maneuvers, under different weather conditions, and in the presence of look-alikes due to natural phenomena such as shoals of fish and seaweed.
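
    To make the encoder-decoder idea concrete, the sketch below shows a minimal convolutional autoencoder for pixel-level spill segmentation in PyTorch. The layer sizes, patch size and dummy data are illustrative assumptions; the selectional autoencoders and RED-net topologies evaluated in the paper are deeper and tuned differently.

```python
# Minimal sketch of a convolutional encoder-decoder for pixel-level oil-spill
# segmentation, assuming single-channel SLAR patches and binary masks. The
# layer sizes are illustrative and do not reproduce the topology from the paper.
import torch
import torch.nn as nn

class SegmentationAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),       # per-pixel oil/water logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step on a dummy batch: binary cross-entropy against the spill mask.
model = SegmentationAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

patches = torch.rand(8, 1, 128, 128)                  # SLAR intensity patches
masks = (torch.rand(8, 1, 128, 128) > 0.95).float()   # sparse spill pixels
loss = criterion(model(patches), masks)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```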

    Remote Sensing Image Scene Classification: Benchmark and State of the Art

    Remote sensing image scene classification plays an important role in a wide range of applications and has therefore been receiving remarkable attention. In recent years, significant efforts have been made to develop various datasets and to present a variety of approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small number of scene classes and images, the lack of image variation and diversity, and the saturation of accuracy. These limitations severely restrict the development of new approaches, especially deep-learning-based methods. This paper first provides a comprehensive review of recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale in the number of scene classes and total images, (ii) exhibits large variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated using the proposed dataset and the results are reported as a useful baseline for future research.
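
    As an illustration of the kind of baseline typically reported on such a benchmark, the sketch below fine-tunes an ImageNet-pretrained CNN on the 45 scene classes. The dataset path, the one-folder-per-class layout, the choice of ResNet-18 and the hyperparameters are assumptions for illustration (and a recent torchvision is assumed), not the evaluation protocol of the paper.

```python
# Sketch of a transfer-learning baseline on NWPU-RESISC45: fine-tune an
# ImageNet-pretrained CNN on the 45 scene classes. Assumes the dataset has
# been downloaded and arranged as one folder per class; the path and
# hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("NWPU-RESISC45/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 45)   # replace head: 45 scene classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    break  # single illustrative step
```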

    Bio-signals compression using auto-encoder

    Recent developments in wearable devices permit a robust and inexpensive way to gather medical data such as bio-signals (ECG, respiration, blood pressure, etc.). Gathering and analysing various biomarkers can provide anticipatory healthcare through customized medical applications. Because wearable devices are constrained by size, resources and battery capacity, a novel algorithm is needed to robustly manage the device's memory and energy. Rapid technological growth has produced numerous autoencoders that efficiently extract and select features from the time and frequency domains. The main aim is to train the hidden layer to reconstruct data similar to the input. Previous works required all features to accomplish compression, whereas the proposed framework, bio-signal compression using an auto-encoder (BCAE), performs the task by selecting and compressing only the important features. This reduces power consumption at the source and hence increases battery life. Performance is compared on three parameters: compression ratio, reconstruction error and power consumption. The proposed method outperforms the SURF method.
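
    A toy sketch of the compression idea follows: a dense bottleneck autoencoder is trained to reconstruct fixed-length signal windows, only the bottleneck code would be transmitted, and compression ratio and reconstruction error are then read off. The window length, code size and synthetic sine-wave "ECG" are placeholder assumptions rather than the BCAE configuration.

```python
# Toy sketch of autoencoder-based bio-signal compression: a dense bottleneck is
# trained to reconstruct fixed-length signal windows; the bottleneck code is
# what would be transmitted. Window length, code size and the synthetic signal
# are placeholders.
import numpy as np
import torch
import torch.nn as nn

window, code = 256, 32                       # samples per window, code length
t = np.linspace(0, 2 * np.pi, window)
signals = np.stack([np.sin(5 * t + p) for p in np.random.rand(64) * np.pi])
x = torch.tensor(signals, dtype=torch.float32)

encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, code))
decoder = nn.Sequential(nn.Linear(code, 64), nn.ReLU(), nn.Linear(64, window))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(200):                         # short illustrative training loop
    optimizer.zero_grad()
    recon = decoder(encoder(x))
    loss = nn.functional.mse_loss(recon, x)
    loss.backward()
    optimizer.step()

compression_ratio = window / code            # 256 samples -> 32-value code
reconstruction_error = loss.item()
print(f"CR = {compression_ratio:.1f}:1, reconstruction MSE = {reconstruction_error:.5f}")
```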

    Robust unsupervised small area change detection from SAR imagery using deep learning

    Small area change detection using synthetic aperture radar (SAR) imagery is a highly challenging task, due to speckle noise and the imbalance between classes (changed and unchanged). In this paper, a robust unsupervised approach is proposed for small area change detection using deep learning techniques. First, a multi-scale superpixel reconstruction method is developed to generate a difference image (DI), which can suppress the speckle noise effectively and enhance edges by exploiting local, spatially homogeneous information. Second, a two-stage centre-constrained fuzzy c-means clustering algorithm is proposed to divide the pixels of the DI into changed, unchanged and intermediate classes with a parallel clustering strategy. Image patches belonging to the first two classes are then used as pseudo-label training samples, and image patches of the intermediate class are treated as testing samples. Finally, a convolutional wavelet neural network (CWNN) is designed and trained to classify the testing samples into changed or unchanged classes, coupled with a deep convolutional generative adversarial network (DCGAN) to increase the number of changed-class samples within the pseudo-label training set. Numerical experiments on four real SAR datasets demonstrate the validity and robustness of the proposed approach, achieving up to 99.61% accuracy for small area change detection.
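
    The pre-classification stage can be sketched as follows, with a plain log-ratio operator and k-means standing in for the paper's multi-scale superpixel reconstruction and centre-constrained fuzzy c-means; the SAR images here are synthetic.

```python
# Simplified sketch of the pre-classification stage: build a difference image
# from two co-registered SAR acquisitions and split its pixels into changed,
# unchanged and intermediate groups. Log-ratio + k-means are stand-ins for the
# paper's superpixel reconstruction and centre-constrained fuzzy c-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img1 = rng.gamma(shape=4.0, scale=25.0, size=(128, 128))   # speckled SAR at t1
img2 = img1.copy()
img2[40:60, 40:60] *= 3.0                                  # small changed area
img2 *= rng.gamma(shape=4.0, scale=0.25, size=img2.shape)  # new speckle at t2

di = np.abs(np.log((img2 + 1.0) / (img1 + 1.0)))           # log-ratio DI

# Cluster DI values into three groups and rank them by mean intensity:
# lowest -> unchanged, highest -> changed, middle -> intermediate (to be
# resolved by the CWNN classifier in the full pipeline).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(di.reshape(-1, 1))
order = np.argsort([di.reshape(-1)[labels == k].mean() for k in range(3)])
unchanged, intermediate, changed = (labels.reshape(di.shape) == order[i] for i in range(3))
print("pseudo-labels:", changed.sum(), "changed,", unchanged.sum(), "unchanged,",
      intermediate.sum(), "intermediate pixels")
```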

    Machine Learning for Enhanced Maritime Situation Awareness: Leveraging Historical AIS Data for Ship Trajectory Prediction

    In this thesis, methods to support high level situation awareness in ship navigators through appropriate automation are investigated. Situation awareness relates to the perception of the environment (level 1), comprehension of the situation (level 2), and projection of future dynamics (level 3). Ship navigators likely conduct mental simulations of future ship traffic (level 3 projections) that facilitate proactive collision avoidance actions. Such actions may include minor speed and/or heading alterations that can prevent future close-encounter situations from arising, enhancing the overall safety of maritime operations. Currently, there is limited automation support for level 3 projections, where the most common approaches utilize linear predictions based on constant speed and course values. Such approaches, however, are not capable of predicting more complex ship behavior. Ship navigators likely facilitate such predictions by developing models for level 3 situation awareness through experience. It is, therefore, suggested in this thesis to develop methods that emulate the development of high level human situation awareness. This is facilitated by leveraging machine learning, where navigational experience is artificially represented by historical AIS data. First, methods are developed to emulate human situation awareness by developing categorization functions. In this manner, historical ship behavior is categorized to reflect distinct patterns. To facilitate this, machine learning is leveraged to generate meaningful representations of historical AIS trajectories and discover clusters of specific behavior. Second, methods are developed to facilitate pattern matching of an observed trajectory segment to clusters of historical ship behavior. Finally, the research in this thesis presents methods to predict future ship behavior with respect to a given cluster. Such predictions are, furthermore, on a scale intended to support proactive collision avoidance actions. Two main approaches are used to facilitate these functions. The first utilizes eigendecomposition-based approaches via locally extracted AIS trajectory segments; anomaly detection is also facilitated via this approach in support of the outlined functions. The second utilizes deep learning-based approaches applied to regionally extracted trajectories. Both approaches are found to be successful in discovering clusters of specific ship behavior in relevant data sets, classifying a trajectory segment to one or more clusters, and predicting future behavior. Furthermore, the local ship behavior techniques can be trained to facilitate live predictions. The deep learning-based techniques, however, require significantly more training time and will therefore need to be pre-trained. Once trained, however, the deep learning models facilitate almost instantaneous predictions.
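
    A loose sketch of the eigendecomposition-based clustering idea is given below: fixed-length trajectory segments are flattened, projected onto a few principal components, and clustered to discover behaviour patterns. The synthetic straight-line segments, the PCA dimensionality and the cluster count are assumptions for illustration only, not the thesis' actual representation or data.

```python
# Minimal sketch of eigendecomposition-based trajectory clustering: flatten
# fixed-length AIS-like segments, project them with PCA, and cluster them to
# discover behaviour patterns. Synthetic segments stand in for real AIS data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_points = 20                                    # samples per trajectory segment

def synthetic_segment(heading):
    """Straight-line track with small noise, a stand-in for an AIS segment."""
    steps = np.linspace(0, 1, n_points)
    lat = steps * np.cos(heading) + rng.normal(0, 0.01, n_points)
    lon = steps * np.sin(heading) + rng.normal(0, 0.01, n_points)
    return np.concatenate([lat, lon])            # flattened (lat, lon) vector

# Two traffic patterns: north-bound and east-bound segments.
segments = np.array([synthetic_segment(h)
                     for h in [0.0] * 50 + [np.pi / 2] * 50])

codes = PCA(n_components=3).fit_transform(segments)        # low-dim representation
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)
print("segments per discovered cluster:", np.bincount(clusters))
```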

    Change detection of deforestation in the Brazilian Amazon using landsat data and convolutional neural networks

    Mapping deforestation is an essential step in the process of managing tropical rainforests. It lets us understand and monitor both legal and illegal deforestation and its implications, which include the effect deforestation may have on climate change through greenhouse gas emissions. Given that there is ample room for improvement in mapping deforestation from satellite imagery, in this study we aimed to test and evaluate algorithms from the growing field of deep learning (DL), particularly convolutional neural networks (CNNs), for this purpose. Although studies have used DL algorithms for a variety of remote sensing tasks in recent years, they remain relatively unexplored for deforestation mapping. We attempted to map the deforestation between images approximately one year apart, specifically between 2017 and 2018 and between 2018 and 2019. Three CNN architectures available in the literature (SharpMask, U-Net, and ResUnet) were used to classify the change between years and were then compared to two classic machine learning (ML) algorithms, random forest (RF) and multilayer perceptron (MLP), as points of reference. After validation, we found that the DL models were better in most performance metrics, including the Kappa index, F1 score, and mean intersection over union (mIoU), while the ResUnet model achieved the best overall results with a value of 0.94 in all three measures in both time sequences. Visually, the DL models also provided classifications with better-defined deforestation patches and did not need any post-processing to remove noise, unlike the ML models, which required some noise removal to improve their results.
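
    The bi-temporal input arrangement such change-detection CNNs rely on can be sketched as follows: bands from the two dates are stacked along the channel axis and a fully convolutional network predicts a per-pixel change mask. The band count, toy network and random data are illustrative assumptions; the SharpMask, U-Net and ResUnet models compared in the study are much deeper encoder-decoders.

```python
# Sketch of bi-temporal CNN change detection: Landsat bands from two dates are
# stacked along the channel axis and a small fully convolutional network
# predicts a per-pixel deforestation mask. Band count, network and data are
# illustrative only.
import torch
import torch.nn as nn

bands = 6                                    # e.g. Landsat reflective bands
t1 = torch.rand(1, bands, 64, 64)            # image at year t
t2 = torch.rand(1, bands, 64, 64)            # image at year t + 1
x = torch.cat([t1, t2], dim=1)               # (1, 2*bands, 64, 64)

change_net = nn.Sequential(
    nn.Conv2d(2 * bands, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                     # deforestation logit per pixel
)
change_mask = torch.sigmoid(change_net(x)) > 0.5
print("predicted deforested pixels:", int(change_mask.sum()))
```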

    A Multimodal Feature Selection Method for Remote Sensing Data Analysis Based on Double Graph Laplacian Diagonalization

    When dealing with multivariate remotely sensed records collected by multiple sensors, an accurate selection of information at the data, feature, or decision level is instrumental in improving the characterization of the scenes. This also enhances the system's efficiency and provides more detail for modeling the physical phenomena occurring on the Earth's surface. In this article, we introduce a flexible and efficient method based on graph Laplacians for information selection at different levels of data fusion. The proposed approach combines data structure and information content to address the limitations of existing graph-Laplacian-based methods in dealing with heterogeneous datasets. Moreover, it adapts the selection to each homogeneous area of the considered images according to their underlying properties. Experimental tests carried out on several multivariate remote sensing datasets show the consistency of the proposed approach.
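
    As a simple illustration of the graph-Laplacian family this method belongs to, the sketch below computes the classical Laplacian score to rank features by how well they respect a k-nearest-neighbour graph built on the samples. This is not the proposed double graph Laplacian diagonalization, only a minimal example of Laplacian-based feature selection on synthetic data.

```python
# Classical Laplacian-score feature ranking: features that vary smoothly over a
# k-NN sample graph score low and are considered more relevant. Synthetic data;
# not the article's double graph Laplacian diagonalization method.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # 200 samples, 8 features
X[:, 0] = np.repeat([0.0, 5.0], 100) + rng.normal(0, 0.1, 200)  # informative feature

W = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)                             # symmetrise the affinity graph
D = np.diag(W.sum(axis=1))
L = D - W                                          # unnormalised graph Laplacian

ones = np.ones(X.shape[0])
scores = []
for r in range(X.shape[1]):
    f = X[:, r]
    f_tilde = f - (f @ D @ ones) / (ones @ D @ ones) * ones   # centre w.r.t. D
    scores.append((f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde))

ranking = np.argsort(scores)                       # lower score = more relevant
print("features ranked by Laplacian score:", ranking)
```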