55 research outputs found
Towards exploring adversarial learning for anomaly detection in complex driving scenes
Many Autonomous Systems (ASs), such as self-driving cars, perform safety-critical
functions. Many of these autonomous systems take advantage of Artificial
Intelligence (AI) techniques to perceive their environment. These perception
components cannot be formally verified, however, because the accuracy of such
AI-based components depends heavily on the quality of the training data. Machine
learning (ML) based anomaly detection, a technique for identifying data that do
not belong to the training data, can therefore be used as a safety indicator
during the development and operation of such AI-based components. Adversarial
learning, a sub-field of machine learning, has proven able to detect anomalies
in images and videos, with impressive results on simple data sets. In this work,
we therefore investigate and provide insight into the performance of such
techniques on a highly complex driving-scene dataset, Berkeley DeepDrive.
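Adversarial anomaly detectors of the kind investigated above typically score a test image by how well an adversarially trained generator can reproduce it. As a minimal, hypothetical sketch (not the paper's actual method), an AnoGAN-style score combines a pixel-space residual with a discriminator-feature residual; `x_hat` and the feature vectors are assumed to come from a trained generator and discriminator:

```python
def anomaly_score(x, x_hat, feat, feat_hat, lam=0.1):
    """AnoGAN-style anomaly score: higher values suggest the input is anomalous.

    x, x_hat       -- input image and its generator reconstruction (flat lists)
    feat, feat_hat -- discriminator features of x and of x_hat (flat lists)
    lam            -- weight of the feature (discrimination) residual
    """
    residual = sum(abs(a - b) for a, b in zip(x, x_hat))    # pixel-space L1
    disc = sum(abs(a - b) for a, b in zip(feat, feat_hat))  # feature-space L1
    return (1 - lam) * residual + lam * disc


# A perfectly reconstructed in-distribution image scores 0.
print(anomaly_score([1.0, 0.5], [1.0, 0.5], [0.2], [0.2]))  # → 0.0
```

In-distribution inputs that the generator reconstructs well score near zero; inputs it cannot reproduce score high and can be flagged by thresholding.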
Fast and Efficient Scene Categorization for Autonomous Driving using VAEs
Scene categorization is a useful precursor task that provides prior knowledge
for many advanced computer vision tasks with a broad range of applications in
content-based image indexing and retrieval systems. Despite the success of
data-driven approaches in computer vision tasks such as object detection and
semantic segmentation, their application to learning high-level features
for scene recognition has not achieved the same level of success. We propose to
generate a fast and efficient intermediate interpretable generalized global
descriptor that captures coarse features from the image and use a
classification head to map the descriptors to 3 scene categories: Rural, Urban
and Suburban. We train a Variational Autoencoder in an unsupervised manner and
map images to a constrained multi-dimensional latent space and use the latent
vectors as compact embeddings that serve as global descriptors for images. The
experimental results show that the VAE latent vectors capture coarse
information from the image, supporting their use as global descriptors. The
proposed global descriptor is very compact, with an embedding length of 128,
is significantly faster to compute, and is robust to seasonal and illumination
changes while capturing sufficient scene information for scene categorization.
Comment: Published in the 24th Irish Machine Vision and Image Processing
Conference (IMVIP 2022).
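The descriptor-plus-head pipeline described above can be sketched with toy numbers. Here the "encoder" is a stand-in linear projection and the classification head is replaced by a nearest-centroid rule over the three scene classes; the weights and centroids are invented for illustration and are not taken from the paper:

```python
import math

def encode(image_vec, w):
    """Stand-in encoder: linear projection of an image vector to a latent descriptor."""
    return [sum(wi * xi for wi, xi in zip(row, image_vec)) for row in w]

def classify(z, centroids):
    """Nearest-centroid stand-in for the classification head."""
    return min(centroids, key=lambda label: math.dist(z, centroids[label]))

# Invented 2-D latent centroids for the three scene categories.
centroids = {"Rural": [0.0, 0.0], "Urban": [5.0, 5.0], "Suburban": [5.0, 0.0]}
w = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # toy 3-pixel image → 2-D latent

z = encode([4.8, 5.1, 0.3], w)          # → [4.8, 5.1]
print(classify(z, centroids))           # → Urban
```

In the paper the descriptor is a 128-dimensional VAE latent vector and the head is learned; the sketch only illustrates the two-stage structure of mapping an image to a compact embedding and then to one of the scene labels.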
Heteroscedastic Uncertainty for Robust Generative Latent Dynamics
Learning or identifying dynamics from a sequence of high-dimensional
observations is a difficult challenge in many domains, including reinforcement
learning and control. The problem has recently been studied from a generative
perspective through latent dynamics: high-dimensional observations are embedded
into a lower-dimensional space in which the dynamics can be learned. Despite
some successes, latent dynamics models have not yet been applied to real-world
robotic systems where learned representations must be robust to a variety of
perceptual confounds and noise sources not seen during training. In this paper,
we present a method to jointly learn a latent state representation and the
associated dynamics that is amenable for long-term planning and closed-loop
control under perceptually difficult conditions. As our main contribution, we
describe how our representation is able to capture a notion of heteroscedastic
or input-specific uncertainty at test time by detecting novel or
out-of-distribution (OOD) inputs. We present results from prediction and
control experiments on two image-based tasks: a simulated pendulum balancing
task and a real-world robotic manipulator reaching task. We demonstrate that
our model produces significantly more accurate predictions and exhibits
improved control performance, compared to a model that assumes homoscedastic
uncertainty only, in the presence of varying degrees of input degradation.
Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Intelligent Robots and Systems (IROS'20),
Las Vegas, USA, October 25-29, 2020.
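The heteroscedastic idea in this abstract can be made concrete with the Gaussian negative log-likelihood in which the model predicts a per-input log-variance alongside the mean. The function below is a generic sketch of that loss, not the paper's exact objective:

```python
import math

def hetero_nll(x, mu, log_var):
    """Gaussian NLL with input-dependent (heteroscedastic) diagonal variance."""
    nll = 0.0
    for xi, mi, lv in zip(x, mu, log_var):
        # 0.5 * [ log var + (x - mu)^2 / var + log 2*pi ] per dimension
        nll += 0.5 * (lv + (xi - mi) ** 2 / math.exp(lv) + math.log(2 * math.pi))
    return nll

# For a large residual (e.g. a corrupted or OOD input), predicting a larger
# variance lowers the loss -- the model can flag its own uncertainty instead
# of being forced to explain the residual.
print(hetero_nll([3.0], [0.0], [0.0]) > hetero_nll([3.0], [0.0], [2.0]))  # → True
```

At test time the predicted variance itself serves as the input-specific uncertainty signal: inputs the model cannot explain receive large variance, which is what enables the OOD detection described in the abstract.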
Autoencoder-based semantic novelty detection: towards dependable AI-based systems
Many autonomous systems, such as driverless taxis, perform safety-critical functions. Autonomous systems employ artificial intelligence (AI) techniques, specifically for environmental perception. Engineers cannot completely test or formally verify AI-based autonomous systems, and the accuracy of AI-based systems depends on the quality of the training data. Thus, novelty detection, that is, identifying data that differ in some respect from the data used for training, becomes a safety measure for system development and operation. In this study, we propose a new architecture for autoencoder-based semantic novelty detection with two innovations: architectural guidelines for a semantic autoencoder topology and a semantic error calculation as the novelty criterion. We demonstrate that such semantic novelty detection outperforms autoencoder-based novelty detection approaches known from the literature by minimizing false negatives.
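A baseline version of autoencoder-based novelty detection, plain reconstruction error against a threshold rather than the paper's semantic error, can be sketched as follows; `x_recon` is assumed to come from a trained autoencoder:

```python
def novelty_score(x, x_recon):
    """Mean squared reconstruction error as a (non-semantic) novelty criterion."""
    return sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)

def is_novel(x, x_recon, threshold=0.1):
    """Flag the input as novel when reconstruction degrades beyond the threshold."""
    return novelty_score(x, x_recon) > threshold

print(is_novel([1.0, 0.0], [1.0, 0.0]))  # → False (well reconstructed)
print(is_novel([1.0, 0.0], [0.0, 1.0]))  # → True  (score 1.0)
```

The paper's contribution replaces this raw pixel-level error with an error computed on semantic features, which is what drives the reduction in false negatives it reports; the threshold value above is an arbitrary placeholder.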
Anomaly Detection in the Latent Space of VAEs
One of the most important challenges in the development of autonomous driving systems is making them robust against unexpected or unknown objects. Many of these systems perform very well in controlled environments where they encounter situations for which they have been trained. For them to be safely deployed in the real world, they need to be aware when they encounter situations or novel objects for which they have not been sufficiently trained, in order to prevent possibly dangerous behavior. In reality, they often fail when dealing with such anomalies, and do so without any sign of uncertainty in their predictions. This thesis focuses on the problem of detecting anomalous objects in road images in the latent space of a VAE. For this, normal and anomalous data were used to train the VAE to fit the data onto two prior distributions, which essentially trains the VAE to form a normal cluster and an anomaly cluster. This structure of the latent space makes it possible to detect anomalies in it using clustering algorithms such as k-means. Multiple experiments were carried out to improve the separation of normal and anomalous data in the latent space. To test this approach, anomaly data from multiple datasets were used to evaluate the detection of anomalies. The approach described in this thesis was able to detect almost all images containing anomalous objects, but it also suffers from a high false positive rate, which remains a common problem of many anomaly detection methods.
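The clustering step this abstract describes, separating normal and anomalous latent codes with k-means, can be sketched with a hand-rolled two-means on toy 2-D latent points; the points and initial centroids are invented for illustration:

```python
import math

def two_means(points, init_normal, init_anomaly, iters=10):
    """Tiny k-means (k=2) over latent vectors; returns the two centroids."""
    a, b = list(init_normal), list(init_anomaly)
    for _ in range(iters):
        ca = [p for p in points if math.dist(p, a) <= math.dist(p, b)]
        cb = [p for p in points if math.dist(p, a) > math.dist(p, b)]
        if ca:
            a = [sum(c) / len(ca) for c in zip(*ca)]  # recompute normal centroid
        if cb:
            b = [sum(c) / len(cb) for c in zip(*cb)]  # recompute anomaly centroid
    return a, b

def is_anomalous(z, normal_c, anomaly_c):
    """Label a latent code by its nearer cluster centroid."""
    return math.dist(z, anomaly_c) < math.dist(z, normal_c)

# Toy latent codes: two normal points near the origin, two anomalies near (5, 5).
latents = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (5.2, 5.0)]
normal_c, anomaly_c = two_means(latents, (1.0, 1.0), (4.0, 4.0))

print(is_anomalous((4.9, 5.1), normal_c, anomaly_c))  # → True
```

In the thesis the two-cluster structure is not discovered from scratch: the VAE is trained against two prior distributions so that normal and anomalous codes already form separate clusters, and k-means then only has to assign new codes to one of them.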