Real-to-Virtual Domain Unification for End-to-End Autonomous Driving
In the spectrum of vision-based autonomous driving, vanilla end-to-end models
are not interpretable and are suboptimal in performance, while mediated
perception models require additional intermediate representations such as
segmentation masks or detection bounding boxes, whose annotation becomes
prohibitively expensive at larger scale. More critically, all prior works fail
to deal with the notorious domain shift that arises when data collected from
different sources is merged, which greatly hinders model generalization. In
this work, we address the above limitations by taking advantage of virtual data
collected from driving simulators, and present DU-drive, an unsupervised
real-to-virtual domain unification framework for end-to-end autonomous driving.
It first transforms real driving data to its less complex counterpart in the
virtual domain and then predicts vehicle control commands from the generated
virtual image. Our framework has three unique advantages: 1) it maps driving
data collected from a variety of source distributions into a unified domain,
effectively eliminating domain shift; 2) the learned virtual representation is
simpler than the input real image and closer in form to the "minimum sufficient
statistic" for the prediction task, which relieves the burden of the
compression phase while optimizing the information bottleneck tradeoff and
leads to superior prediction performance; 3) it takes advantage of annotated
virtual data, which is unlimited and free to obtain. Extensive experiments on
two public driving datasets and two driving simulators demonstrate the
performance superiority and interpretive capability of DU-drive.
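
To make the two-stage pipeline described above concrete, here is a minimal
PyTorch sketch: a generator maps a real image into the virtual domain, and a
predictor regresses a control command from the generated virtual image. All
module names, layer sizes, and the single steering output are illustrative
assumptions, not the authors' architecture; the unsupervised training of the
generator (e.g. adversarially against simulator images) is omitted.

```python
# Illustrative sketch only: shapes, names, and the training signal are
# assumptions; the paper's actual DU-drive architecture may differ.
import torch
import torch.nn as nn

class RealToVirtualGenerator(nn.Module):
    """Maps a real driving image to a simpler virtual-domain counterpart."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, real_img):
        return self.net(real_img)

class ControlPredictor(nn.Module):
    """Predicts a control command from the generated virtual image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(48, 1)  # e.g. a steering angle

    def forward(self, virtual_img):
        return self.head(self.features(virtual_img))

generator = RealToVirtualGenerator()
predictor = ControlPredictor()
real_batch = torch.randn(8, 3, 64, 64)       # dummy real-domain images
steering = predictor(generator(real_batch))  # predict in the unified domain
print(steering.shape)                        # torch.Size([8, 1])
```

Because every source distribution is first funneled through the generator,
the predictor only ever sees virtual-domain inputs, which is how the design
sidesteps domain shift across datasets.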
Towards Visually Explaining Variational Autoencoders
Recent advances in Convolutional Neural Network (CNN) model interpretability
have led to impressive progress in visualizing and understanding model
predictions. In particular, gradient-based visual attention methods have driven
much recent effort in using visual attention maps as a means for visual
explanations. A key problem, however, is these methods are designed for
classification and categorization tasks, and their extension to explaining
generative models, e.g. variational autoencoders (VAE) is not trivial. In this
work, we take a step towards bridging this crucial gap, proposing the first
technique to visually explain VAEs by means of gradient-based attention. We
present methods to generate visual attention from the learned latent space, and
also demonstrate that such attention explanations serve more than just explaining
VAE predictions. We show how these attention maps can be used to localize
anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD
dataset. We also show how they can be infused into model training, helping
bootstrap the VAE into learning improved latent space disentanglement,
demonstrated on the dSprites dataset.
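
As a rough illustration of gradient-based attention computed from a VAE's
latent space, the PyTorch sketch below backpropagates an aggregated latent
mean code to the encoder's last convolutional features and weights them
Grad-CAM-style. The tiny encoder, the choice of summing the latent means,
and all shapes are assumptions for illustration, not the paper's exact method.

```python
# Hedged sketch of latent-space gradient attention for a VAE; the encoder
# layout and the scalar used for backprop are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAEEncoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        feats = self.conv(x)               # last conv feature map
        mu = self.fc_mu(feats.flatten(1))  # latent mean code
        return feats, mu

def latent_attention_map(encoder, x):
    """Weight encoder features by gradients of the aggregated latent code."""
    feats, mu = encoder(x)
    feats.retain_grad()                    # keep grads on a non-leaf tensor
    mu.sum().backward()                    # backprop the aggregated code
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

encoder = TinyVAEEncoder()
image = torch.randn(1, 1, 64, 64)
attention = latent_attention_map(encoder, image)
print(attention.shape)                     # torch.Size([1, 1, 64, 64])
```

For the anomaly-localization use the abstract mentions, one would compare
such maps against a model trained on normal data, so regions the latent code
is unusually sensitive to stand out; the exact scoring rule is the paper's,
not shown here.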