Towards Anomaly Explanation in Feature Models
Feature models are a widespread approach to variability
and commonality management in software product lines.
Due to the increasing size and complexity of feature
models, anomalies such as inconsistencies and
redundancies can occur, leading to increased effort in
feature model development and maintenance. In this paper
we introduce knowledge representations that serve as
a basis for the explanation of anomalies in feature
models. Building on these representations, we
show how explanation algorithms can be applied.
The results of a performance analysis show the applicability
of these algorithms for anomaly detection
in feature models. We conclude the paper with
a discussion of future research issues.
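The anomalies the abstract mentions can be illustrated on a toy example. The sketch below is not from the paper: the feature model (a "car" with a mandatory "engine" and an alternative group "gas"/"electric") and the brute-force enumeration are hypothetical, chosen only to show what a "dead feature" anomaly looks like. A cross-tree constraint forcing "electric" makes "gas" unselectable in every valid configuration:

```python
from itertools import product

# Hypothetical toy feature model (names illustrative, not from the paper).
FEATURES = ["car", "engine", "gas", "electric"]

def valid(cfg):
    """Check one configuration (dict: feature -> bool) against the model."""
    return (
        cfg["car"]                                        # root is always selected
        and cfg["engine"] == cfg["car"]                   # mandatory child
        and (not cfg["gas"] or cfg["engine"])             # child implies parent
        and (not cfg["electric"] or cfg["engine"])
        and (not cfg["engine"] or (cfg["gas"] ^ cfg["electric"]))  # alternative: exactly one
        and (not cfg["car"] or cfg["electric"])           # cross-tree constraint: car -> electric
    )

def dead_features():
    """A feature is 'dead' (an anomaly) if no valid configuration selects it."""
    selectable = set()
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if valid(cfg):
            selectable.update(f for f in FEATURES if cfg[f])
    return [f for f in FEATURES if f not in selectable]

print(dead_features())  # ['gas']
```

Explaining (rather than merely detecting) this anomaly, as the paper aims to do, would mean isolating the minimal set of constraints responsible; here, the alternative group together with the car-implies-electric constraint.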
Towards Visually Explaining Variational Autoencoders
Recent advances in Convolutional Neural Network (CNN) model interpretability
have led to impressive progress in visualizing and understanding model
predictions. In particular, gradient-based visual attention methods have driven
much recent effort in using visual attention maps as a means for visual
explanations. A key problem, however, is that these methods are designed for
classification and categorization tasks, and their extension to explaining
generative models, e.g., variational autoencoders (VAEs), is not trivial. In this
work, we take a step towards bridging this crucial gap, proposing the first
technique to visually explain VAEs by means of gradient-based attention. We
present methods to generate visual attention from the learned latent space, and
also demonstrate that such attention explanations serve more than just explaining
VAE predictions. We show how these attention maps can be used to localize
anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD
dataset. We also show how they can be infused into model training, helping
bootstrap the VAE into learning improved latent space disentanglement,
demonstrated on the dSprites dataset.
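The core idea, attention maps derived from gradients of latent variables with respect to the input, can be sketched in a few lines. The example below is a deliberately simplified stand-in, not the paper's method: a single random linear layer plays the role of a trained VAE encoder, so the gradient of each latent unit with respect to the input is just the corresponding weight row:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "encoder": a linear map from a flattened 8x8 image to a
# 4-dim latent code, standing in for a trained convolutional VAE encoder.
W = rng.standard_normal((4, 64))

def latent_attention(x):
    """Gradient-based attention: weight each input pixel by |z_k * dz_k/dx|,
    summed over latent units k. For the linear encoder z = W @ x,
    the gradient dz_k/dx is simply row W[k]."""
    z = W @ x
    attn = np.abs(z[:, None] * W).sum(axis=0)  # shape (64,)
    attn = attn / attn.max()                   # normalize to [0, 1]
    return attn.reshape(8, 8)

x = rng.standard_normal(64)
amap = latent_attention(x)
print(amap.shape)  # (8, 8)
```

In the actual paper the gradients flow through a deep CNN (computed via backpropagation rather than in closed form), and high-attention regions are used to localize anomalies, e.g. defective areas in MVTec-AD images.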
Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features
The one-class support vector machine (OC-SVM) has long been one of the
most effective anomaly detection methods and is extensively adopted in both
research and industrial applications. The biggest issue for OC-SVM, however, is
its limited capability to operate on large, high-dimensional datasets due to
optimization complexity. These problems can be mitigated via dimensionality
reduction techniques such as manifold learning or autoencoders. However,
previous work often treats representation learning and anomaly prediction
separately. In this paper, we propose an autoencoder-based one-class support
vector machine (AE-1SVM) that brings OC-SVM, with the aid of random Fourier
features to approximate the radial basis kernel, into the deep learning context by
combining it with a representation learning architecture and jointly exploiting
stochastic gradient descent to obtain end-to-end training. Interestingly, this
also opens up the possible use of gradient-based attribution methods to explain
the decision making for anomaly detection, which has long been challenging as a
result of the implicit mapping between the input space and the kernel space.
To the best of our knowledge, this is the first work to study the
interpretability of deep learning in anomaly detection. We evaluate our method
on a wide range of unsupervised anomaly detection tasks in which our end-to-end
training architecture achieves a performance significantly better than the
previous work using separate training.
Comment: Accepted at European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases (ECML-PKDD) 201
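The random Fourier feature trick the abstract relies on can be shown in isolation. The sketch below (the function name, `gamma` value, and test points are illustrative; the OC-SVM and autoencoder parts of AE-1SVM are omitted) builds the classic Rahimi–Recht map z(x) whose inner products approximate the RBF kernel, which is what lets the kernel machine be trained with SGD:

```python
import numpy as np

rng = np.random.default_rng(42)

def rff_map(X, n_features=2000, gamma=0.5):
    """Random Fourier feature map z such that z(x) @ z(y) approximates the
    RBF kernel exp(-gamma * ||x - y||^2). Frequencies are drawn from
    N(0, 2*gamma*I); phases are uniform in [0, 2*pi)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = np.array([0.3, -1.2, 0.7])
y = np.array([0.1, -0.9, 0.5])

exact = np.exp(-0.5 * np.sum((x - y) ** 2))   # true RBF kernel value
Z = rff_map(np.vstack([x, y]))                # map both points with ONE draw of W, b
approx = float(Z[0] @ Z[1])
print(round(exact, 3), round(approx, 3))      # the two values should be close
```

Because z(x) is an explicit finite-dimensional feature vector, the decision function becomes a plain linear model in z-space, which is why gradient-based attribution back to the input, the interpretability angle of the paper, becomes feasible.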
Origins of conductance anomalies in a p-type GaAs quantum point contact
Low temperature transport measurements on a p-GaAs quantum point contact are
presented which reveal the presence of a conductance anomaly that is markedly
different from the conventional `0.7 anomaly'. A lateral shift by asymmetric
gating of the conducting channel is utilized to identify and separate different
conductance anomalies of local and generic origins experimentally. While the
more generic 0.7 anomaly is not directly affected by changing the gate
configuration, a model is proposed which attributes the additional conductance
features to a gate-dependent coupling of the propagating states to localized
states emerging due to a nearby potential imperfection. Finite bias
conductivity measurements reveal an interplay between the two anomalies that is
consistent with a two-impurity Kondo model.