Real-time self-adaptive deep stereo
Deep convolutional neural networks trained end-to-end are the
state-of-the-art methods to regress dense disparity maps from stereo pairs.
These models, however, suffer from a notable decrease in accuracy when exposed
to scenarios significantly different from the training set (e.g., real vs.
synthetic images). We argue that it is extremely unlikely to gather
enough samples to achieve effective training/tuning in any target domain, thus
making this setup impractical for many applications. Instead, we propose to
perform unsupervised and continuous online adaptation of a deep stereo network,
which allows for preserving its accuracy in any environment. However, this
strategy is extremely computationally demanding and thus prevents real-time
inference. We address this issue by introducing a new lightweight, yet
effective, deep stereo architecture, the Modularly ADaptive Network (MADNet),
and by developing a Modular ADaptation (MAD) algorithm, which independently
trains sub-portions of the network. By deploying MADNet together with MAD, we
introduce the first
real-time self-adaptive deep stereo system enabling competitive performance on
heterogeneous datasets.
Comment: Accepted at CVPR 2019 as an oral presentation. Code available at
https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stere
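The core idea of MAD described above, splitting the network into independent modules and adapting only one of them per incoming frame, can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the `Module` class, the uniform selection rule, and the constant stand-in loss are all assumptions (the paper scores modules by their recent loss reduction and uses an unsupervised photometric loss).

```python
import random

class Module:
    """Stand-in for one trainable sub-portion of the stereo network."""
    def __init__(self, name):
        self.name = name
        self.updates = 0  # number of adaptation steps this module received

    def adapt(self, unsupervised_loss):
        # Placeholder for a gradient step on this module's parameters only.
        self.updates += 1

def mad_step(modules, unsupervised_loss, rng):
    # Adapt a single module for this frame, so the per-frame overhead stays
    # low enough for real-time inference. Uniform sampling is used here as
    # a simplification of the paper's reward-based selection.
    chosen = rng.choice(modules)
    chosen.adapt(unsupervised_loss)
    return chosen.name

rng = random.Random(0)
modules = [Module(f"block{i}") for i in range(5)]
for frame in range(100):
    mad_step(modules, unsupervised_loss=0.1, rng=rng)
```

The payoff of this scheme is that full back-propagation through the whole network is never needed on any single frame, which is what makes continuous online adaptation compatible with real-time operation.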
Talk2Car: Taking Control of Your Self-Driving Car
A long-term goal of artificial intelligence is to have an agent execute
commands communicated through natural language. In many cases the commands are
grounded in a visual environment shared by the human who gives the command and
the agent. Execution of the command then requires mapping the command into the
physical visual space, after which the appropriate action can be taken. In this
paper we consider the former. More specifically, we study the problem in
an autonomous driving setting, where a passenger requests an action that can be
associated with an object found in a street scene. Our work presents the
Talk2Car dataset, which is the first object referral dataset that contains
commands written in natural language for self-driving cars. We provide a
detailed comparison with related datasets such as ReferIt, RefCOCO, RefCOCO+,
RefCOCOg, Cityscape-Ref and CLEVR-Ref. Additionally, we include a performance
analysis using strong state-of-the-art models. The results show that the
proposed object referral task is a challenging one for which the models show
promising results but still require additional research in natural language
processing, computer vision and the intersection of these fields. The dataset
can be found on our website: http://macchina-ai.eu/
Comment: 14 pages, accepted at EMNLP-IJCNLP 2019. Added Talk2Nav reference.
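The object referral task described above is commonly evaluated by checking whether a predicted bounding box overlaps the ground-truth object with sufficient intersection-over-union (IoU). The sketch below shows this metric under assumed conventions: `(x1, y1, x2, y2)` box coordinates and a 0.5 threshold, which are illustrative choices rather than the dataset's exact protocol.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes do not intersect.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def referral_accuracy(preds, gts, thresh=0.5):
    """Fraction of commands whose predicted box matches the referred object."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts)
```

A prediction thus counts as correct only when the model has grounded the natural-language command to the right object, not merely to the right region of the image.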