
    DNN-based PolSAR image classification on noisy labels

    Deep neural networks (DNNs) appear to be a solution for the classification of polarimetric synthetic aperture radar (PolSAR) data in that they outperform classical supervised classifiers given sufficient training samples. Designing such a classifier is challenging, however, because DNNs can easily overfit due to limited remote sensing training samples and unavoidable noisy labels. In this article, a softmax loss strategy with antinoise capability, namely, the probability-aware sample grading strategy (PASGS), is developed to overcome this limitation. Combined with the proposed softmax loss strategy, two classical DNN-based classifiers are implemented to perform PolSAR image classification to demonstrate its effectiveness. In this framework, the difference distribution implicitly reflects the probability that a training sample is clean, and clean labels can be distinguished from noisy labels by probability statistics. This probability is then employed to reweight the corresponding loss of each training sample during training, locating the noisy data and preventing it from participating in the loss calculation of the neural network. As the number of training iterations increases, the probability-statistics criterion for the noisy labels is adjusted continually without supervision, and the clean labels are eventually identified to train the neural network. Experiments on three PolSAR datasets with two DNN-based methods demonstrate that the proposed method is superior to state-of-the-art methods. This work was supported in part by the National Natural Science Foundation of China under Grant 61871413 and Grant 61801015, in part by the Fundamental Research Funds for the Central Universities under Grant XK2020-03, in part by the China Scholarship Council under Grant 2020006880033, and in part by Grant PID2020-114623RB-C32 funded by MCIN/AEI/10.13039/501100011033.
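    The per-sample reweighting idea described above can be sketched as follows. The abstract does not give the exact PASGS statistics, so the batch z-score rule below is only an illustrative stand-in for the paper's probability-based grading, and all names are invented here.

```python
import numpy as np

def reweighted_softmax_loss(logits, labels, threshold=2.0):
    """Per-sample cross-entropy, down-weighted for likely-noisy labels.

    Illustrative sketch: samples whose loss sits far above the batch
    statistics are treated as probably mislabeled and given weight ~0,
    so they do not drive the gradient.
    """
    # Numerically stable softmax over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Per-sample cross-entropy loss.
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    # Proxy for the probability that each sample is clean, from batch
    # loss statistics: losses far above the mean get weight ~0.
    zscore = (ce - ce.mean()) / (ce.std() + 1e-12)
    weights = np.clip(1.0 - zscore / threshold, 0.0, 1.0)
    return (weights * ce).sum() / (weights.sum() + 1e-12)
```

    In a training loop the weights would be recomputed every batch, which mirrors the unsupervised, iteration-by-iteration adjustment the abstract describes.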

    Detection of Fish Fillet Substitution and Mislabeling Using Multimode Hyperspectral Imaging Techniques

    Substitution of high-priced fish species with inexpensive alternatives and mislabeling frozen-thawed fish fillets as fresh are two fraudulent practices of concern in the seafood industry. This study aimed to develop multimode hyperspectral imaging techniques to detect substitution and mislabeling of fish fillets. Line-scan hyperspectral images were acquired from fish fillets in four modes: reflectance in the visible and near-infrared (VNIR) region, fluorescence by 365 nm UV excitation, reflectance in the short-wave infrared (SWIR) region, and Raman by 785 nm laser excitation. Fish fillets of six species (i.e., red snapper, vermilion snapper, Malabar snapper, summer flounder, white bass, and tilapia) were used for species differentiation, and frozen-thawed red snapper fillets were used for freshness evaluation. All fillet samples were DNA tested to authenticate the species. A total of 24 machine learning classifiers in six categories (i.e., decision trees, discriminant analysis, Naive Bayes classifiers, support vector machines, k-nearest neighbor classifiers, and ensemble classifiers) were used for fish species and freshness classification using four types of spectral data in three different datasets (i.e., full spectra, the first ten components of principal component analysis, and bands selected by a sequential feature selection method). The highest accuracies achieved were 100% using full VNIR reflectance spectra for the species classification and 99.9% using full SWIR reflectance spectra for the freshness classification. The VNIR reflectance mode gave the overall best performance for both species and freshness inspection, and it will be further investigated as a rapid technique for the detection of fish fillet substitution and mislabeling.
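    As a rough illustration of the second dataset variant (classification on the first ten principal components), here is a minimal NumPy sketch with a nearest-centroid classifier standing in for the 24 classifiers compared in the study; the data and function names are hypothetical.

```python
import numpy as np

def pca_reduce(spectra, n_components=10):
    """Project spectra onto the first n principal components, as in the
    study's reduced dataset (first ten PCA components)."""
    X = spectra - spectra.mean(axis=0)
    # SVD-based PCA: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def nearest_centroid_predict(train_X, train_y, test_X):
    """Minimal stand-in classifier: assign each spectrum to the class
    with the nearest mean in the reduced feature space."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]
```

    Any of the six classifier families named in the abstract could replace the nearest-centroid step; the PCA reduction stays the same.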

    Automated classification of heat sources detected using SWIR remote sensing

    The potential of shortwave infrared (SWIR) remote sensing to detect hotspots has been investigated using satellite data for decades. Hotspots detected by satellite SWIR sensors include very high-temperature heat sources such as wildfires, volcanoes, industrial activity, and open burning. This study proposes an automated method for classifying the heat sources detected using Landsat 8 and Sentinel-2 data. We created training data of heat sources via visual inspection of hotspots detected by Landsat 8. A scheme to classify heat sources in daytime data was developed by combining a classification method based on a convolutional neural network (CNN) exploiting spatial features with a decision tree algorithm based on thematic land-cover information and our time-series detection record. Validation using 10,959 classification results corresponding to hotspots acquired from May 2017 to July 2019 indicated that the two classification results were in 79.7% agreement. For hotspots where the two classification schemes agreed, the classification was 97.9% accurate. Even when the results of the two schemes conflicted, one of them was correct in 73% of the samples. To improve the accuracy, the heat source category was re-allocated to the most probable category corresponding to the combination of the results from the two methods. Integrating the two approaches achieved an overall accuracy of 92.8%. In contrast, the overall accuracy for heat-source classification at nighttime reached only 79.3%, because only the decision tree-based classification was applicable to the limited available data. Comparison with the Visible Infrared Imaging Radiometer Suite (VIIRS) fire product revealed that, despite the limited data acquisition frequency of Landsat 8, regional tendencies in hotspot occurrence were qualitatively appropriate over an annual period on a global scale.
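    The integration step, re-allocating each hotspot to the most probable category for a given pair of results, can be sketched as a simple lookup; the category names and mapping contents below are hypothetical.

```python
def fuse_predictions(cnn_label, tree_label, reallocation):
    """Re-allocate a hotspot to the most probable heat-source category
    for each (CNN, decision-tree) result pair.

    `reallocation` maps disagreeing pairs to the category observed to be
    most likely for that combination (e.g. from validation statistics);
    agreements, which the study found 97.9% accurate, are kept as-is.
    """
    if cnn_label == tree_label:
        return cnn_label
    # Fall back to the CNN result if the pair was never observed.
    return reallocation.get((cnn_label, tree_label), cnn_label)
```

    The mapping itself would be estimated from the validation set, where the most probable true category for each combination of conflicting results is recorded.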

    Robust Normalized Softmax Loss for Deep Metric Learning-Based Characterization of Remote Sensing Images With Label Noise

    Most deep metric learning-based image characterization methods exploit supervised information to model the semantic relations among the remote sensing (RS) scenes. Nonetheless, the unprecedented availability of large-scale RS data makes the annotation of such images very challenging, requiring automated supportive processes. Whether the annotation is assisted by aggregation or crowd-sourcing, the RS large-variance problem, together with other important factors [e.g., geo-location/registration errors, land-cover changes, even low-quality Volunteered Geographic Information (VGI), etc.] often introduce the so-called label noise, i.e., semantic annotation errors. In this article, we first investigate the deep metric learning-based characterization of RS images with label noise and propose a novel loss formulation, named robust normalized softmax loss (RNSL), for robustly learning the metrics among RS scenes. Specifically, our RNSL improves the robustness of the normalized softmax loss (NSL), commonly utilized for deep metric learning, by replacing its logarithmic function with the negative Box–Cox transformation in order to down-weight the contributions from noisy images on the learning of the corresponding class prototypes. Moreover, by truncating the loss with a certain threshold, we also propose a truncated robust normalized softmax loss (t-RNSL) which can further enforce the learning of class prototypes based on the image features with high similarities between them, so that the intraclass features can be well grouped and interclass features can be well separated. Our experiments, conducted on two benchmark RS data sets, validate the effectiveness of the proposed approach with respect to different state-of-the-art methods in three different downstream applications (classification, clustering, and retrieval). The codes of this article will be publicly available from https://github.com/jiankang1991
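    The core substitution, replacing the logarithm of NSL with the negative Box–Cox transformation, can be written down directly. The parameter values below are illustrative, not the paper's; the functions operate on the per-sample probability already produced by the normalized softmax.

```python
import numpy as np

def rnsl(p, lam=0.7):
    """Robust normalized softmax loss on per-sample probabilities p.

    NSL uses -log(p); RNSL replaces the log with the negative Box-Cox
    transformation, (1 - p**lam) / lam, which is bounded for p in (0, 1]
    and therefore down-weights the contribution of low-probability
    (likely noisy) samples. As lam -> 0 this recovers -log(p).
    """
    return (1.0 - p ** lam) / lam

def t_rnsl(p, lam=0.7, k=0.3):
    """Truncated variant (t-RNSL): samples with probability below the
    threshold k contribute a constant loss, so they stop pulling on the
    class prototypes and only high-similarity features shape them."""
    return np.where(p > k, rnsl(p, lam), rnsl(np.full_like(p, k), lam))
```

    Because the truncated branch is constant, its gradient is zero, which is what enforces prototype learning from high-similarity features only.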

    Local block multilayer sparse extreme learning machine for effective feature extraction and classification of hyperspectral images

    Although extreme learning machines (ELMs) have been successfully applied for the classification of hyperspectral images (HSIs), they still suffer from three main drawbacks: 1) ineffective feature extraction (FE) in HSIs due to the single hidden-layer network used; 2) ill-posed problems caused by the random input weights and biases; and 3) lack of spatial information for HSI classification. To tackle the first problem, we construct a multilayer ELM for effective FE from HSIs. Sparse representation is adopted with the multilayer ELM to tackle the ill-posed problem of the ELM, which can be solved by the alternating direction method of multipliers (ADMM). This results in the proposed multilayer sparse ELM (MSELM) model. Considering that neighboring pixels are more likely to belong to the same class, a local block extension is introduced for the MSELM to extract local spatial information, leading to the local block MSELM (LBMSELM). Loopy belief propagation is also applied to the proposed MSELM and LBMSELM approaches to further exploit the rich spectral and spatial information for improving the classification. Experimental results show that the proposed methods outperform the ELM and other state-of-the-art approaches.

    Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey

    Image classification systems recently made a giant leap with the advancement of deep neural networks. However, these systems require an excessive amount of labeled data to be adequately trained. Gathering a correctly annotated dataset is not always feasible due to several factors, such as the expensiveness of the labeling process or the difficulty of correctly classifying data, even for experts. Because of these practical challenges, label noise is a common problem in real-world datasets, and numerous methods to train deep neural networks with label noise have been proposed in the literature. Although deep neural networks are known to be relatively robust to label noise, their tendency to overfit makes them vulnerable to memorizing even random noise. It is therefore crucial to account for label noise and develop counter algorithms that mitigate its adverse effects when training deep neural networks. Even though an extensive survey of machine learning techniques under label noise exists, the literature lacks a comprehensive survey of methodologies centered explicitly on deep learning in the presence of noisy labels. This paper presents these algorithms while categorizing them into one of two subgroups: noise model based and noise model free methods. Algorithms in the first group aim to estimate the noise structure and use this information to avoid the adverse effects of noisy labels. Methods in the second group instead try to design inherently noise-robust algorithms by using approaches such as robust losses, regularizers, or other learning paradigms.
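    A minimal example of the first (noise model based) group is forward loss correction with an estimated noise transition matrix; the function below is a generic sketch of that idea, not a specific algorithm from the survey.

```python
import numpy as np

def forward_corrected_ce(probs, labels, T):
    """Noise-model-based loss correction.

    `T[i, j]` is the estimated probability P(noisy label j | true label i).
    Training against the *noisy* labels with predictions mapped through T
    lets the underlying classifier converge toward the clean posterior.
    `probs` is (n_samples, n_classes); `labels` holds the noisy labels.
    """
    noisy_probs = probs @ T   # predicted distribution over noisy labels
    return -np.log(noisy_probs[np.arange(len(labels)), labels] + 1e-12).mean()
```

    With `T` equal to the identity this reduces to ordinary cross-entropy, which is a convenient sanity check; the hard part in practice is estimating `T` itself.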

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Noise Models in Classification: Unified Nomenclature, Extended Taxonomy and Pragmatic Categorization

    This paper presents the first review of noise models in classification covering both label and attribute noise. The study of these models reveals the lack of a unified nomenclature in the field. To address this problem, a tripartite nomenclature based on a structural analysis of existing noise models is proposed. Additionally, a revision of the current taxonomies is carried out; these are combined and updated to better reflect the nature of any model. Finally, a categorization of noise models is proposed from a practical point of view, depending on the characteristics of the noise and the purpose of the study. These contributions provide a variety of models for introducing noise, their characteristics according to the proposed taxonomy, and a unified way of naming them, which will facilitate their identification and study, as well as the reproducibility of future research.

    A robust dynamic classifier selection approach for hyperspectral images with imprecise label information

    Supervised hyperspectral image (HSI) classification relies on accurate label information. However, it is not always possible to collect perfectly accurate labels for training samples. This motivates the development of classifiers that are sufficiently robust to reasonable amounts of error in data labels. Despite the growing importance of this aspect, it has not yet been sufficiently studied in the literature. In this paper, we analyze the effect of erroneous sample labels on the probability distributions of the principal components of HSIs, and thereby provide a statistical analysis of the resulting uncertainty in classifiers. Building on the theory of imprecise probabilities, we develop a novel robust dynamic classifier selection (R-DCS) model for data classification with erroneous labels. In particular, spectral and spatial features are extracted from HSIs to construct two individual classifiers for the dynamic selection. The proposed R-DCS model is based on the robustness of the classifiers’ predictions: the extent to which a classifier can be altered without changing its prediction. We provide three possible selection strategies for the proposed model with different computational complexities and apply them to three benchmark data sets. Experimental results demonstrate that the proposed model outperforms the individual classifiers it selects from and is more robust to errors in labels than widely adopted approaches.
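    A toy version of robustness-based selection: pick the classifier whose prediction would take the most alteration to flip. The paper derives robustness from imprecise probabilities; the top-two-probability margin used below is only an illustrative proxy, and all names are invented here.

```python
import numpy as np

def select_most_robust(prob_sets):
    """Dynamic classifier selection by prediction robustness.

    `prob_sets[k]` holds classifier k's class probabilities for one
    sample (e.g. one spectral-feature and one spatial-feature model).
    The margin between the top two class probabilities serves as a
    simple proxy for how much the classifier could be perturbed without
    changing its prediction; the classifier with the largest margin wins.
    """
    margins = []
    for p in prob_sets:
        top2 = np.sort(p)[-2:]
        margins.append(top2[1] - top2[0])
    k = int(np.argmax(margins))
    return k, int(np.argmax(prob_sets[k]))
```

    The three strategies mentioned in the abstract would differ in how this robustness score is computed and how often it is evaluated, trading accuracy against computational cost.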

    Reduced and coded sensing methods for x-ray based security

    Current x-ray technologies provide security personnel with non-invasive sub-surface imaging and contraband detection in various portal screening applications such as checked and carry-on baggage as well as cargo. Computed tomography (CT) scanners generate detailed 3D imagery of checked bags; however, these scanners often require significant power, cost, and space. Such tomography machines are impractical for many applications where space and power are limited, such as checkpoint areas. Reducing the amount of data acquired would help reduce the physical demands of these systems. Unfortunately, this leads to the formation of artifacts in various applications, presenting significant challenges in reconstruction and classification. The goal, therefore, is to maintain a certain level of image quality while reducing the amount of data gathered. For the security domain this would allow faster and cheaper screening in existing systems, or enable screening options previously infeasible due to operational constraints. While our focus is predominantly on security applications, many of the techniques extend to other fields, such as the medical domain, where a reduction of dose can allow safer and more frequent examinations. This dissertation aims to advance data reduction algorithms for security-motivated x-ray imaging in three main areas: (i) development of a sensing-aware dimensionality reduction framework; (ii) creation of a linear-motion tomographic method of object scanning, with associated reconstruction algorithms, for carry-on baggage screening; and (iii) application of coded aperture techniques to improve and extend the imaging performance of nuclear resonance fluorescence in cargo screening. The sensing-aware dimensionality reduction framework extends existing dimensionality reduction methods to include knowledge of an underlying sensing mechanism of a latent variable. This method provides an improved classification rate over classical methods on both a synthetic case and a popular face classification dataset. The linear tomographic method is based on non-rotational scanning of baggage moved by a conveyor belt, and can thus be simpler, smaller, and more reliable than existing rotational tomography systems, at the expense of more challenging image formation problems that require special model-based methods. The reconstructions for this approach are comparable to those of existing tomographic systems. Finally, our coded aperture extension of existing nuclear resonance fluorescence cargo scanning provides improved observation signal-to-noise ratios. We analyze, discuss, and demonstrate the strengths and challenges of using coded aperture techniques in this application and provide guidance on regimes where these methods can yield gains over conventional methods.