Anomaly Detection using a Convolutional Winner-Take-All Autoencoder
We propose a method for video anomaly detection using a winner-take-all convolutional autoencoder that has recently been shown to give competitive results in learning for classification tasks. The method builds on state of the art approaches to anomaly detection using a convolutional autoencoder and a one-class SVM to build a model of normality. The key novelties are (1) using the motion-feature encoding extracted from a convolutional autoencoder as input to a one-class SVM rather than exploiting the reconstruction error of the convolutional autoencoder, and (2) introducing a spatial winner-take-all step after the final encoding layer during training to induce a high degree of sparsity. We demonstrate an improvement in performance over the state of the art on the UCSD and Avenue (CUHK) datasets.
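The spatial winner-take-all step described above can be sketched as follows. This is an illustrative toy, not the authors' code: for each feature map, only the single largest spatial activation is kept and everything else is zeroed, producing highly sparse encodings. Feature maps are represented here as plain nested lists.

```python
def spatial_winner_take_all(feature_maps):
    """Keep only the maximum spatial activation in each feature map.

    feature_maps: list of 2D lists (one per channel).
    Returns sparse feature maps of the same shape.
    """
    sparse = []
    for fmap in feature_maps:
        # Locate the spatial position of the maximum activation.
        _, br, bc = max((v, r, c)
                        for r, row in enumerate(fmap)
                        for c, v in enumerate(row))
        # Zero every other position in this channel.
        sparse.append([[v if (r == br and c == bc) else 0.0
                        for c, v in enumerate(row)]
                       for r, row in enumerate(fmap)])
    return sparse

maps = [[[0.1, 0.9], [0.3, 0.2]]]
print(spatial_winner_take_all(maps))  # only the 0.9 activation survives
```

In a real network this operation would be applied channel-wise to the final encoder activations during training, with gradients flowing only through the surviving winner.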
Anomaly Detection in Video
Anomaly detection is an area of video analysis that has great importance in automated surveillance. Although it has been extensively studied, there has been little work on using deep convolutional neural networks to learn spatio-temporal feature representations. In this thesis we present novel approaches for learning motion features and modelling normal spatio-temporal dynamics for anomaly detection. The contributions are divided into two main chapters. The first introduces a method that uses a convolutional autoencoder to learn motion features from foreground optical flow patches. The autoencoder is coupled with a spatial sparsity constraint, known as Winner-Take-All, to learn shift-invariant and generic flow-features. This avoids the hand-crafted feature representations used in state of the art methods. Moreover, to capture variations in the scale of motion patterns as an object moves in depth through the scene, we also divide the image plane into regions and learn a separate normality model in each region. We compare the methods with state of the art approaches on two datasets and demonstrate improved performance.
The second main chapter presents an end-to-end method that learns normal spatio-temporal dynamics from video volumes using a sequence-to-sequence encoder-decoder for prediction and reconstruction. This work is based on the intuition that the encoder-decoder learns to estimate normal sequences in a training set with low error, and therefore estimates an abnormal sequence with high error. The error between the network's output and the target is used to classify a video volume as normal or abnormal. In addition to reconstruction error, we also use prediction error for anomaly detection.
We evaluate the second method on three datasets. The prediction models show comparable performance with state of the art methods. In comparison with the first proposed method, performance is improved on one dataset. Moreover, running time is significantly faster.
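The error-based classification rule described above can be sketched in a few lines. This is a hedged illustration of the scoring idea only: sequences are flat float lists standing in for video volumes, the model output is assumed given, and the threshold value is arbitrary rather than taken from the thesis.

```python
def anomaly_score(target, output):
    """Mean squared error between the target sequence and the model output."""
    return sum((t - o) ** 2 for t, o in zip(target, output)) / len(target)

def is_abnormal(target, output, threshold=0.1):
    # A model trained only on normal data reconstructs/predicts normal
    # sequences with low error, so high error flags an anomaly.
    return anomaly_score(target, output) > threshold

# A well-reconstructed (normal) sequence scores below the threshold.
print(is_abnormal([1.0, 2.0, 3.0], [1.05, 1.95, 3.02]))  # False
```

The same rule applies whether `output` comes from reconstructing the input volume or predicting the next frames; only the target changes.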
Towards Phytoplankton Parasite Detection Using Autoencoders
Phytoplankton parasites are largely understudied microbial components with a
potentially significant ecological impact on phytoplankton bloom dynamics. To
better understand their impact, we need improved detection methods to integrate
phytoplankton parasite interactions in monitoring aquatic ecosystems. Automated
imaging devices usually produce high amounts of phytoplankton image data, while
the occurrence of anomalous phytoplankton data is rare. Thus, we propose an
unsupervised anomaly detection system based on the similarity of the original
and autoencoder-reconstructed samples. With this approach, we were able to
reach an overall F1 score of 0.75 in nine phytoplankton species, which could be
further improved by species-specific fine-tuning. The proposed unsupervised
approach was further compared with the supervised Faster R-CNN based object
detector. With this supervised approach and the model trained on plankton
species and anomalies, we were able to reach the highest F1 score of 0.86.
However, the unsupervised approach is expected to be more universal, as it can
also detect unknown anomalies and does not require annotated anomalous data,
which may not always be available in sufficient quantities. Although other
studies have addressed plankton anomaly detection in terms of non-plankton
particles or air bubble detection, ours is, to the best of our knowledge, the
first to focus on automated anomaly detection of putative phytoplankton
parasites or infections.
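The similarity-based detection described above can be sketched as follows. The names and the similarity measure are illustrative assumptions, not the paper's method: images are flat float lists, and similarity is a simple inverse-MSE score rather than whatever metric the study actually uses.

```python
def similarity(original, reconstructed):
    """Toy similarity in (0, 1]: 1.0 means a perfect reconstruction."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    return 1.0 / (1.0 + mse)

def detect_parasite_candidates(pairs, threshold=0.9):
    """Flag samples the autoencoder reconstructs poorly.

    pairs: list of (original, reconstructed) image pairs.
    Returns indices of candidate anomalies (e.g. putative parasites),
    since an autoencoder trained on healthy plankton should reconstruct
    anomalous samples badly.
    """
    return [i for i, (orig, rec) in enumerate(pairs)
            if similarity(orig, rec) < threshold]
```

A per-species threshold would correspond to the species-specific fine-tuning the abstract mentions.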
Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events
As a vital topic in media content interpretation, video anomaly detection
(VAD) has made fruitful progress via deep neural network (DNN). However,
existing methods usually follow a reconstruction or frame prediction routine.
They suffer from two gaps: (1) they cannot localize video activities in a
manner that is both precise and comprehensive, and (2) they lack sufficient
ability to exploit high-level semantics and temporal context. Inspired by the
cloze test frequently used in language study, we propose a new VAD solution
named Video Event Completion (VEC) to bridge these gaps: First, we
propose a novel pipeline to achieve both precise and comprehensive enclosure of
video activities. Appearance and motion are exploited as mutually complimentary
cues to localize regions of interest (RoIs). A normalized spatio-temporal cube
(STC) is built from each RoI as a video event, which lays the foundation of VEC
and serves as a basic processing unit. Second, we encourage DNN to capture
high-level semantics by solving a visual cloze test. To build such a visual
cloze test, a certain patch of STC is erased to yield an incomplete event (IE).
The DNN learns to restore the original video event from the IE by inferring the
missing patch. Third, to incorporate richer motion dynamics, another DNN is
trained to infer erased patches' optical flow. Finally, two ensemble strategies
using different types of IE and modalities are proposed to boost VAD
performance, so as to fully exploit the temporal context and modality
information for VAD. VEC can consistently outperform state-of-the-art methods
by a notable margin (typically 1.5%-5% AUROC) on commonly-used VAD benchmarks.
Our codes and results can be verified at github.com/yuguangnudt/VEC_VAD.
Comment: To be published as an oral paper in Proceedings of the 28th ACM
International Conference on Multimedia (ACM MM '20). 9 pages, 7 figures.
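The erase-and-restore idea behind the visual cloze test can be sketched as follows. This is purely illustrative: a spatio-temporal cube is a short list of "patches" (flat float lists), and the mean-of-neighbours completer stands in for the DNN the paper actually trains.

```python
def erase_patch(cube, idx):
    """Yield an incomplete event (IE) by erasing one patch of the cube."""
    incomplete = list(cube)
    incomplete[idx] = None  # the erased patch, to be inferred
    return incomplete

def restore_by_interpolation(incomplete, idx):
    # Toy completer: average the temporal neighbours of the erased patch.
    prev, nxt = incomplete[idx - 1], incomplete[idx + 1]
    return [(a + b) / 2 for a, b in zip(prev, nxt)]

def completion_error(cube, idx):
    """Squared error of restoring the erased patch: the anomaly cue."""
    restored = restore_by_interpolation(erase_patch(cube, idx), idx)
    return sum((a - b) ** 2 for a, b in zip(cube[idx], restored))

smooth = [[0.0], [1.0], [2.0]]   # predictable motion -> low error
jumpy  = [[0.0], [5.0], [2.0]]   # surprising motion  -> high error
print(completion_error(smooth, 1) < completion_error(jumpy, 1))  # True
```

In VEC itself the completer is a DNN trained on normal events, different patches are erased to form different IE types, and a second network infers the erased patches' optical flow; the ensemble of those errors gives the final score.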
Learning normal appearance for fetal anomaly screening: application to the unsupervised detection of Hypoplastic Left Heart Syndrome
Congenital heart disease is considered one of the most common groups of congenital malformations, affecting 6–11 per 1000 newborns. In this work, an automated framework for the detection of cardiac anomalies during ultrasound screening is proposed and evaluated on the example of Hypoplastic Left Heart Syndrome (HLHS), a sub-category of congenital heart disease. We propose an unsupervised approach that learns healthy anatomy exclusively from clinically confirmed normal control patients. We evaluate a number of known anomaly detection frameworks together with a new model architecture based on the α-GAN network and find evidence that the proposed model performs significantly better than the state of the art in image-based anomaly detection, yielding an average AUC of 0.81 and better robustness to initialisation than previous works.
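For reference, the AUC figure quoted above is the standard area under the ROC curve, which can be computed directly from anomaly scores and labels via pairwise comparisons (the Mann-Whitney formulation). This generic sketch is not the paper's evaluation code; the score and label values are made up for illustration.

```python
def auroc(scores, labels):
    """Area under the ROC curve from anomaly scores and binary labels.

    Equals the probability that a randomly chosen anomalous sample
    (label 1) scores higher than a randomly chosen normal one (label 0);
    ties count as half a win.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of anomalies from normals gives AUC = 1.0.
print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

An AUC of 0.81 therefore means an anomalous scan outscores a normal one about 81% of the time under the model's anomaly score.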