13 research outputs found

    Cluster-Memory Augmented Deep Autoencoder via Optimal Transportation for Hyperspectral Anomaly Detection

    No full text
    Hyperspectral anomaly detection (AD) aims to detect objects that differ significantly from their surrounding background. Recently, many autoencoder (AE)-based detectors have exhibited promising performance on hyperspectral AD tasks. However, the fundamental hypothesis of AE-based detectors, namely that anomalies are harder to reconstruct than the background, may not always hold in practice. We demonstrate that an AE can reconstruct anomalies well even when no anomalies appear in the training data, because AE models focus mainly on the quality of sample reconstruction and do not ensure that the encoded features represent only the background. If more information is preserved than is needed to reconstruct the background, anomalies will also be reconstructed well. To solve this problem, this article proposes a cluster-memory augmented deep autoencoder via optimal transportation (OTCMA) for hyperspectral AD. A deep clustering method based on optimal transportation (OT) is proposed to enhance the feature consistency of samples within the same category and the feature discrimination of samples across different categories. The memory module stores the background's consistent features, namely the cluster centers of each background category. Instead of reconstructing a sample from its own encoded features, we retrieve more consistent features from the memory module. Training the AE with the memory module makes the network focus on consistent feature reconstruction, which effectively restricts the reconstruction ability of the AE and prevents it from reconstructing anomalies. Extensive experiments on benchmark datasets demonstrate that the proposed OTCMA achieves state-of-the-art results. In addition, this article further discusses the effectiveness of the proposed memory module and different criteria for better AD.
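    The memory-based reconstruction idea described above can be sketched in a few lines. Everything below is an illustrative stand-in: the cluster centers are random placeholders for OT-learned background centers, and the feature dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical memory of background cluster centers (in the paper these
# come from OT-based deep clustering of encoded background features).
memory = rng.normal(size=(4, 8))  # 4 background categories, 8-dim features

def retrieve(z):
    """Replace an encoded feature z by its nearest memory item, so the
    decoder only ever sees background-consistent features."""
    dists = np.linalg.norm(memory - z, axis=1)
    return memory[np.argmin(dists)]

z_background = memory[1] + 0.05 * rng.normal(size=8)  # close to a center
z_anomaly = 3.0 * rng.normal(size=8)                  # far from all centers

# Anomaly score: how much the memory retrieval changes the feature.
score_bg = np.linalg.norm(z_background - retrieve(z_background))
score_anom = np.linalg.norm(z_anomaly - retrieve(z_anomaly))
```

    Because an anomalous feature is far from every stored center, its retrieved substitute differs strongly from it, which yields a large reconstruction error.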

    Unsupervised Outlier Detection Using Memory and Contrastive Learning

    No full text
    Outlier detection separates anomalous data from the inliers in a dataset. Recently, most deep learning methods for outlier detection have leveraged an auxiliary reconstruction task, assuming that outliers are more difficult to recover than normal samples (inliers). However, this assumption does not always hold for deep auto-encoder (AE) based models: AE-based detectors may recover certain outliers even when no outliers are present in the training data, because they do not constrain the feature learning. Instead, we argue that outlier detection can be performed in the feature space by measuring the distance between an outlier's features and the consistent features of the inliers. To achieve this, we propose an unsupervised outlier detection method using a memory module and a contrastive learning module (MCOD). The memory module constrains the consistency of the features, which represent only the normal data. The contrastive learning module learns more discriminative features, which sharpens the distinction between outliers and inliers. Extensive experiments on four benchmark datasets show that the proposed MCOD performs well and outperforms eleven state-of-the-art methods.
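    The feature-space scoring idea can be sketched as follows, with a fixed unit vector standing in for the learned inlier "consistency" feature (the actual MCOD learns it with its memory and contrastive modules):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the consistent inlier feature kept in the memory module.
prototype = np.ones(16) / np.sqrt(16.0)

def outlier_score(f):
    """Score = 1 - cosine similarity to the inlier prototype; larger
    values mean the feature lies farther from normal data."""
    f = f / np.linalg.norm(f)
    return 1.0 - float(f @ prototype)

inlier = prototype + 0.1 * rng.normal(size=16)   # near the prototype
outlier = rng.normal(size=16)                    # unrelated direction
```

    No reconstruction is involved: the decision is made purely by distance in feature space, which is the paper's stated alternative to the reconstruction assumption.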

    Hyperspectral Anomaly Detection via Background and Potential Anomaly Dictionaries Construction

    Full text link
    In this paper, we propose a new anomaly detection method for hyperspectral images based on two well-designed dictionaries: a background dictionary and a potential anomaly dictionary. To detect anomalies effectively and eliminate the influence of noise, the original image is decomposed into three components: background, anomalies, and noise. In this way, the anomaly detection task is cast as a matrix decomposition problem. Considering the homogeneity of the background and the sparsity of anomalies, low-rank and sparse constraints are imposed in our model. The background and potential anomaly dictionaries are then constructed using background and anomaly priors. For the background dictionary, a joint sparse representation (JSR)-based dictionary selection strategy is proposed, under the assumption that the frequently used atoms of the overcomplete dictionary tend to represent the background. To make full use of the prior information about anomalies hidden in the scene, the potential anomaly dictionary is constructed. We define a criterion, the anomalous level of a pixel, using the residual computed by the JSR model within its local region; it is then combined with a weighting term to alleviate the influence of noise and background. Experiments show that the proposed anomaly detection method based on background and potential anomaly dictionary construction achieves superior results compared with other state-of-the-art methods.
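    The matrix-decomposition view above (low-rank background plus sparse anomalies plus noise) can be illustrated with a simplified alternating scheme. This toy uses plain SVD truncation and soft-thresholding rather than the paper's dictionary-based JSR model, and all sizes, ranks, and thresholds are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scene: rank-2 background + two sparse anomalies + small noise.
B_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
A_true = np.zeros((20, 15))
A_true[3, 4], A_true[11, 7] = 10.0, -8.0
X = B_true + A_true + 0.01 * rng.normal(size=(20, 15))

def shrink(M, tau):
    """Soft-thresholding (the l1 proximal operator): keeps only large
    entries, enforcing the sparsity prior on the anomaly component."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

A = np.zeros_like(X)
for _ in range(25):
    # Low-rank step: project X - A onto its top-2 singular subspace.
    u, s, vt = np.linalg.svd(X - A, full_matrices=False)
    B = (u[:, :2] * s[:2]) @ vt[:2]
    # Sparse step: threshold the residual to isolate the anomalies.
    A = shrink(X - B, 1.0)

# The injected anomalies should dominate the recovered sparse component.
peak = np.unravel_index(np.argmax(np.abs(A)), A.shape)
```

    The alternation mirrors the low-rank/sparse constraints in the model: the background absorbs the globally correlated structure, while the thresholding step leaves only the few pixels the background cannot explain.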

    Lipid complexation reduces rice starch digestibility and boosts short-chain fatty acid production via gut microbiota

    No full text
    Abstract In this study, two rice varieties (RS4 and GZ93) with different amylose and lipid contents were studied, and their starch was used to prepare starch-palmitic acid complexes. The RS4 samples showed significantly higher lipid content in their flour, starch, and complex samples than GZ93. Static in vitro digestion showed that the RS4 samples had significantly lower digestibility than the GZ93 samples. The C∞ of the starch-lipid complex samples was 17.7% and 18.5% lower than that of the starch samples for GZ93 and RS4, respectively. The undigested INFOGEST fractions were subsequently used for in vitro colonic fermentation. Short-chain fatty acid (SCFA) concentrations, mainly acetate and propionate, were significantly higher for the starch-lipid complexes than for the native flour or starch samples. The starch-lipid complexes produced a distinctive microbial composition, which resulted in different gene functions, mainly related to pyruvate, fructose, and mannose metabolism. Using Model-based Integration of Metabolite Observations and Species Abundances 2 (MIMOSA2), SCFA production was predicted and associated with the gut microbiota. These results indicate that incorporating lipids into rice starch selectively modulates the gut microbiota and thereby promotes SCFA production.

    Element-Wise Feature Relation Learning Network for Cross-Spectral Image Patch Matching

    No full text
    Recently, the majority of successful matching approaches have been based on convolutional neural networks, which focus on learning invariant and discriminative features for individual image patches from image content. However, the image patch matching task is essentially to predict the matching relationship of a patch pair, that is, matching (similar) or non-matching (dissimilar). We therefore consider feature relation (FR) learning more important than individual feature learning for the image patch matching problem. Motivated by this, we propose an element-wise FR learning network for image patch matching, which transforms the image patch matching task into an image relationship-based pattern classification problem and dramatically improves generalization on image matching. Meanwhile, the proposed element-wise learning methods encourage full interaction between feature information and can naturally learn the FR. Moreover, we propose to aggregate the FR across multiple levels, integrating multiscale FR for more precise matching. Experimental results demonstrate that our proposal achieves superior performance on cross-spectral and single-spectral image patch matching, and generalizes well to image patch retrieval.
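    As a toy illustration of the feature-relation idea, one can build an element-wise relation vector from two patch features and classify that vector, rather than thresholding a raw feature distance. The concrete relation operations below (per-dimension product and absolute difference) are common choices assumed for the sketch, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def relation_vector(f1, f2):
    """Element-wise feature relation: the per-dimension product and
    absolute difference, concatenated. A small classifier on top of this
    vector would predict matching vs non-matching."""
    return np.concatenate([f1 * f2, np.abs(f1 - f2)])

f_anchor = rng.normal(size=32)
f_match = f_anchor + 0.05 * rng.normal(size=32)  # same scene, other spectrum
f_nonmatch = rng.normal(size=32)                 # unrelated patch

r_match = relation_vector(f_anchor, f_match)
r_nonmatch = relation_vector(f_anchor, f_nonmatch)
```

    The relation vector, not either raw feature on its own, carries the matching signal: its difference half is near zero for matching pairs and large for non-matching ones.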

    Multi-Relation Attention Network for Image Patch Matching

    No full text
    Deep convolutional neural networks are attracting increasing attention in image patch matching. However, most of them rely on a single similarity learning model, such as feature distance or the correlation of concatenated features. Their performance degenerates under the complex relations between matching patches caused by various imagery changes. To tackle this challenge, we propose a multi-relation attention learning network (MRAN) for image patch matching. Specifically, we propose to fuse multiple feature relations (MR) for matching, which benefits from the complementary advantages of different feature relations and achieves significant improvements on matching tasks. Furthermore, we propose a relation attention learning module to learn the fused relation adaptively. With this module, meaningful feature relations are emphasized and the others are suppressed. Extensive experiments show that our MRAN achieves the best matching performance and generalizes well to multi-modal image patch matching, multi-modal remote sensing image patch matching, and image retrieval tasks.
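    The fusion step described above can be sketched as an attention-weighted sum over several relation maps. The two relation types and the fixed attention logits below are placeholders for quantities MRAN would learn.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_relations(relations, attn_logits):
    """Attention-weighted fusion of multiple feature relations: relations
    with high attention weight dominate, the rest are suppressed."""
    w = softmax(np.asarray(attn_logits, dtype=float))
    return sum(wi * ri for wi, ri in zip(w, relations))

f1, f2 = np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.5, 2.5])
rel_diff = -np.abs(f1 - f2)   # distance-style relation
rel_corr = f1 * f2            # correlation-style relation
fused = fuse_relations([rel_diff, rel_corr], attn_logits=[0.2, 1.5])
```

    With equal logits this reduces to plain averaging; learning the logits per input is what lets the attention module adapt the fusion to each patch pair.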
