Making the End-User a Priority in Benchmarking: OrionBench for Unsupervised Time Series Anomaly Detection
Time series anomaly detection is a prevalent problem in many application
domains such as patient monitoring in healthcare, forecasting in finance, or
predictive maintenance in energy. This has led to the emergence of a plethora
of anomaly detection methods, including, more recently, deep-learning-based
methods. Although several benchmarks have been proposed to compare newly
developed models, they usually rely on one-time execution over a limited set of
datasets and the comparison is restricted to a few models. We propose
OrionBench -- a user-centric, continuously maintained benchmark for unsupervised
time series anomaly detection. The framework provides universal abstractions to
represent models, extensibility to add new pipelines and datasets,
hyperparameter standardization, pipeline verification, and frequent releases
with published benchmarks. We demonstrate the usage of OrionBench, and the
progression of pipelines across 15 releases published over the course of three
years. Moreover, we walk through two real scenarios we experienced with
OrionBench that highlight the importance of continuous benchmarks in
unsupervised time series anomaly detection.
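The abstract's notion of universal abstractions with hyperparameter standardization and pipeline verification can be illustrated with a minimal sketch. Everything below (the `Pipeline` class, `register`, the spike-based smoke test, and the z-score detector) is hypothetical and illustrative; it is not OrionBench's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a universal pipeline abstraction for a continuously
# maintained benchmark: each detector carries standardized hyperparameters
# and must pass a verification check before it is admitted to the registry.
# All names here are illustrative, not OrionBench's real interface.

@dataclass
class Pipeline:
    name: str
    detect: Callable[[List[float]], List[int]]  # returns anomalous indices
    hyperparameters: Dict[str, float] = field(default_factory=dict)

REGISTRY: Dict[str, Pipeline] = {}

def register(pipeline: Pipeline) -> None:
    """Verify the pipeline on a trivial signal, then add it to the registry."""
    smoke = [0.0] * 50 + [100.0] + [0.0] * 49  # one obvious spike at index 50
    anomalies = pipeline.detect(smoke)
    assert 50 in anomalies, f"{pipeline.name} failed verification"
    REGISTRY[pipeline.name] = pipeline

def zscore_detector(signal: List[float], threshold: float = 3.0) -> List[int]:
    """Flag points whose absolute z-score exceeds the threshold."""
    import statistics
    mu = statistics.mean(signal)
    sd = statistics.pstdev(signal) or 1.0
    return [i for i, v in enumerate(signal) if abs(v - mu) / sd > threshold]

register(Pipeline("zscore", zscore_detector, {"threshold": 3.0}))
```

Verification-on-registration is one way a benchmark can guarantee that every published release only contains pipelines that still run end to end.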
BiGSeT: Binary Mask-Guided Separation Training for DNN-based Hyperspectral Anomaly Detection
Hyperspectral anomaly detection (HAD) aims to recognize a minority of
anomalies that are spectrally different from their surrounding background
without prior knowledge. Deep neural networks (DNNs), including autoencoders
(AEs), convolutional neural networks (CNNs) and vision transformers (ViTs),
have shown remarkable performance in this field due to their powerful ability
to model the complicated background. However, for reconstruction tasks, DNNs
tend to incorporate both background and anomalies into the estimated
background, which is referred to as the identical mapping problem (IMP) and
leads to significantly decreased performance. To address this limitation, we
propose a model-independent binary mask-guided separation training strategy for
DNNs, named BiGSeT. Our method introduces a separation training loss based on a
latent binary mask to separately constrain the background and anomalies in the
estimated image. The background is preserved, while the potential anomalies are
suppressed by using an efficient second-order Laplacian of Gaussian (LoG)
operator, generating a pure background estimate. In order to maintain
separability during training, we periodically update the mask using a robust
proportion threshold estimated before training. In our experiments, we
adopt a vanilla AE as the network to validate our training strategy on several
real-world datasets. Our results show superior performance compared to several
state-of-the-art methods; for example, we achieve a 90.67% AUC score on the
HyMap Cooke City dataset. Additionally, we applied our training strategy to
other deep network structures, achieving improved detection performance
compared to their original versions, demonstrating its transferability. The
code of our method will be available at
https://github.com/enter-i-username/BiGSeT.
Comment: 13 pages, 13 figures, submitted to IEEE Transactions on Image Processing
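The separation training idea above (constrain background pixels by reconstruction error while suppressing masked anomaly pixels through a second-order LoG response, with the mask refreshed by a proportion threshold) can be sketched as follows. This is an assumed, simplified single-band implementation, not the authors' code: the 5x5 LoG kernel, the loss weighting, and the `update_mask` helper are all illustrative choices.

```python
import numpy as np

# Discrete 5x5 Laplacian-of-Gaussian kernel (a common approximation).
LOG_KERNEL = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0, -1, -2, -1,  0],
    [-1, -2, 16, -2, -1],
    [ 0, -1, -2, -1,  0],
    [ 0,  0, -1,  0,  0],
], dtype=float)

def conv2d(img, kernel):
    """Valid-mode 2-D convolution via explicit loops (small images only)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def separation_loss(x, x_hat, mask, weight=1.0):
    """Binary mask-guided separation loss (illustrative, single band).

    mask == 0: background pixel, constrained by reconstruction MSE.
    mask == 1: potential anomaly, suppressed via squared LoG response
    of the estimated image.
    """
    bg = mask == 0
    recon = np.mean((x[bg] - x_hat[bg]) ** 2) if bg.any() else 0.0
    log_resp = conv2d(x_hat, LOG_KERNEL)
    m = mask[2:-2, 2:-2]  # crop mask to align with valid-mode output
    an = m == 1
    suppress = np.mean(log_resp[an] ** 2) if an.any() else 0.0
    return recon + weight * suppress

def update_mask(residual, proportion=0.02):
    """Flag the top `proportion` of residual magnitudes as potential anomalies,
    mimicking a periodic mask refresh by a fixed proportion threshold."""
    thr = np.quantile(np.abs(residual), 1.0 - proportion)
    return (np.abs(residual) > thr).astype(int)
```

On a clean background with a perfect reconstruction the loss is zero; a spike in the estimated image that falls under the anomaly mask produces a large LoG response and is penalized, which is the mechanism that keeps anomalies out of the background estimate.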