Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong
performance on RGB salient object detection. Although depth information can
help improve detection results, the exploration of CNNs for RGB-D salient
object detection remains limited. Here we propose a novel deep CNN architecture
for RGB-D salient object detection that exploits high-level, mid-level, and
low-level features. Further, we present novel depth features that capture the ideas
of background enclosure and depth contrast that are suitable for a learned
approach. We show improved results compared to state-of-the-art RGB-D salient
object detection methods. We also show that the low-level and mid-level depth
features both contribute to improvements in the results. In particular, our
method achieves an F-score of 0.848 on the RGBD1000 dataset, 10.7% better than
the second-best method.
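The F-score quoted above is conventionally computed in the salient object detection literature as a weighted harmonic mean of precision and recall with beta^2 = 0.3. A minimal Python sketch under that assumption, using a simple fixed threshold (the paper's exact evaluation protocol may differ):

```python
import numpy as np

def f_measure(pred, gt, beta_sq=0.3, thresh=0.5):
    """F-measure between a predicted saliency map and a binary ground-truth
    mask. beta_sq = 0.3 is the weighting conventionally used in salient
    object detection benchmarks (precision weighted over recall)."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)

pred = np.array([[0.9, 0.2], [0.8, 0.1]])
gt = np.array([[True, False], [True, False]])
print(round(f_measure(pred, gt), 3))  # perfect detection -> 1.0
```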
Lifting GIS Maps into Strong Geometric Context for Scene Understanding
Contextual information can have a substantial impact on the performance of
visual tasks such as semantic segmentation, object detection, and geometric
estimation. Data stored in Geographic Information Systems (GIS) offers a rich
source of contextual information that has been largely untapped by computer
vision. We propose to leverage such information for scene understanding by
combining GIS resources with large sets of unorganized photographs using
Structure from Motion (SfM) techniques. We present a pipeline to quickly
generate strong 3D geometric priors from 2D GIS data using SfM models aligned
with minimal user input. Given an image resectioned against this model, we
generate robust predictions of depth, surface normals, and semantic labels. We
show that the predicted geometry is substantially more accurate than that of
other single-image depth estimation methods. We then demonstrate the
utility of these contextual constraints for re-scoring pedestrian detections,
and use these GIS contextual features alongside object detection score maps to
improve a CRF-based semantic segmentation framework, boosting accuracy over
baseline models.
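Generating a depth and semantic-label prior from a 3D model for a resectioned image amounts to projecting labeled model points through the recovered camera with a z-buffer. The sketch below is an illustrative simplification in Python, not the paper's pipeline; the function name and camera convention are assumptions:

```python
import numpy as np

def render_depth_prior(points_world, labels, K, R, t, hw):
    """Project labeled 3D model points into a resectioned camera to obtain
    sparse depth and semantic-label priors. K: 3x3 intrinsics; R, t:
    world-to-camera rotation/translation; hw: (height, width)."""
    h, w = hw
    depth = np.full((h, w), np.inf)
    sem = np.full((h, w), -1, dtype=int)
    cam = points_world @ R.T + t              # world -> camera coordinates
    for (x, y, z), lab in zip(cam, labels):
        if z <= 0:                            # behind the camera
            continue
        u, v, _ = K @ np.array([x, y, z])
        u, v = int(u / z), int(v / z)         # perspective division
        if 0 <= u < w and 0 <= v < h and z < depth[v, u]:
            depth[v, u] = z                   # keep nearest surface (z-buffer)
            sem[v, u] = lab
    return depth, sem
```

The resulting sparse maps can then serve as unary priors when predicting dense depth, normals, or labels.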
Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundamental
challenge in daily vision. Visual search must be selective enough to
discriminate the target from distractors, invariant to changes in the
appearance of the target, efficient to avoid exhaustive exploration of the
image, and must generalize to locate novel target objects with zero-shot
training. Previous work has focused on searching for perfect matches of a
target after extensive category-specific training. Here we show for the first
time that humans can efficiently and invariantly search for natural objects in
complex scenes. To gain insight into the mechanisms that guide visual search,
we propose a biologically inspired computational model that can locate targets
without exhaustive sampling and generalize to novel objects. The model provides
an approximation to the mechanisms integrating bottom-up and top-down signals
during search in natural scenes.
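One common way to combine a top-down target signal with bottom-up scene features is to score each spatial location of a scene feature map by its similarity to a target feature vector. The toy numpy sketch below illustrates that idea only; the shapes and names are assumptions, not the authors' model:

```python
import numpy as np

def top_down_attention(scene_feats, target_feat):
    """Attention map from top-down modulation: cosine similarity between a
    target feature vector (length C) and each spatial location of a scene
    feature map (C x H x W). The search would then visit locations in
    decreasing order of this map rather than sampling exhaustively."""
    c, h, w = scene_feats.shape
    flat = scene_feats.reshape(c, -1)                       # C x (H*W)
    sims = target_feat @ flat                               # dot products
    norms = np.linalg.norm(flat, axis=0) * np.linalg.norm(target_feat) + 1e-9
    return (sims / norms).reshape(h, w)

scene = np.zeros((3, 2, 2))
scene[:, 1, 0] = [1.0, 2.0, 3.0]        # the "target" hidden at (1, 0)
att = top_down_attention(scene, np.array([1.0, 2.0, 3.0]))
print(np.unravel_index(att.argmax(), att.shape))  # -> (1, 0)
```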
Perceiving animacy from shape
Superordinate visual classification (for example, identifying an image as animal, plant, or mineral) is computationally challenging because radically different items (e.g., octopus, dog) must be grouped into a common class (animal). It is plausible that learning superordinate categories teaches us not only the membership of particular (familiar) items, but also general features that are shared across class members, aiding us in classifying novel (unfamiliar) items. Here, we investigated visual shape features associated with animate and inanimate classes. One group of participants viewed images of 75 unfamiliar and atypical items and provided separate ratings of how much each image looked like an animal, plant, and mineral. Results show systematic tradeoffs between the ratings, indicating a class-like organization of items. A second group rated each image in terms of 22 midlevel shape features (e.g., symmetrical, curved). The results confirm that superordinate classes are associated with particular shape features (e.g., animals generally have high symmetry ratings). Moreover, linear discriminant analysis based on the 22-D feature vectors predicts the perceived classes approximately as well as the ground truth classification. This suggests that a generic set of midlevel visual shape features forms the basis for superordinate classification of novel objects along the animacy continuum.
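As a rough illustration of predicting class from midlevel shape ratings, the Python sketch below uses nearest-centroid assignment (LDA with an identity within-class covariance reduces to this). The three features and their values are invented for the example; the study used 22 rated features:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean of the feature vectors (rows of X)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy 3-feature ratings (stand-ins for the 22 midlevel shape features):
# e.g., column 0 could be a symmetry rating, high for animals.
X = np.array([[0.9, 0.7, 0.2],   # animal
              [0.8, 0.6, 0.3],   # animal
              [0.1, 0.2, 0.9],   # mineral
              [0.2, 0.1, 0.8]])  # mineral
y = np.array(["animal", "animal", "mineral", "mineral"])
cents = fit_centroids(X, y)
print(predict(cents, np.array([0.85, 0.65, 0.25])))  # -> animal
```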
Multimodal 3D Object Detection from Simulated Pretraining
The need for simulated data in autonomous driving applications has become
increasingly important, both for validation of pretrained models and for
training new models. In order for these models to generalize to real-world
applications, it is critical that the underlying dataset contains a variety of
driving scenarios and that simulated sensor readings closely mimic real-world
sensors. We present the Carla Automated Dataset Extraction Tool (CADET), a
novel tool for generating training data from the CARLA simulator to be used in
autonomous driving research. The tool is able to export high-quality,
synchronized LIDAR and camera data with object annotations, and offers
configuration to accurately reflect a real-life sensor array. Furthermore, we
use this tool to generate a dataset consisting of 10,000 samples and use this
dataset in order to train the 3D object detection network AVOD-FPN, with
finetuning on the KITTI dataset in order to evaluate the potential for
effective pretraining. We also present two novel LIDAR feature map
configurations in Bird's Eye View for use with AVOD-FPN that can be easily
modified. These configurations are tested on the KITTI and CADET datasets in
order to evaluate their performance as well as the usability of the simulated
dataset for pretraining. Although insufficient to fully replace the use of real
world data, and generally not able to exceed the performance of systems fully
trained on real data, our results indicate that simulated data can considerably
reduce the amount of training on real data required to achieve satisfactory
levels of accuracy.
Comment: 12 pages, part of the proceedings of the NAIS 2019 symposium.
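A Bird's Eye View LIDAR feature map of the kind described is typically built by discretizing the point cloud into a ground-plane grid with per-cell channels such as maximum height and point density. The sketch below is a generic Python version of that encoding; the ranges and resolution are arbitrary assumptions, and CADET's actual channel configurations may differ:

```python
import numpy as np

def bev_maps(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.5):
    """Discretize a LIDAR point cloud (N x 3: x forward, y left, z up) into
    Bird's Eye View grids: a max-height channel and a point-density channel."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    height = np.full((nx, ny), -np.inf)
    density = np.zeros((nx, ny))
    for x, y, z in points:
        i = int((x - x_range[0]) / res)
        j = int((y - y_range[0]) / res)
        if 0 <= i < nx and 0 <= j < ny:
            height[i, j] = max(height[i, j], z)  # tallest point in the cell
            density[i, j] += 1                   # number of points in the cell
    height[np.isinf(height)] = 0.0               # empty cells -> 0
    return height, density
```

Stacking several such channels yields the multi-channel BEV input consumed by detectors like AVOD-FPN.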