Learning Complexity-Aware Cascades for Deep Pedestrian Detection
The design of complexity-aware cascaded detectors, combining features of very
different complexities, is considered. A new cascade design procedure is
introduced, by formulating cascade learning as the Lagrangian optimization of a
risk that accounts for both accuracy and complexity. A boosting algorithm,
denoted as complexity aware cascade training (CompACT), is then derived to
solve this optimization. CompACT cascades are shown to seek an optimal
trade-off between accuracy and complexity by pushing features of higher
complexity to the later cascade stages, where only a few difficult candidate
patches remain to be classified. This enables the use of features of vastly
different complexities in a single detector. As a result, the feature pool can be
expanded to features previously impractical for cascade design, such as the
responses of a deep convolutional neural network (CNN). This is demonstrated
through the design of a pedestrian detector with a pool of features whose
complexities span orders of magnitude. The resulting cascade generalizes the
combination of a CNN with an object proposal mechanism: rather than relegating
the CNN to a pre-processing stage, CompACT cascades seamlessly integrate CNNs
into their stages. This enables state-of-the-art performance on the Caltech and
KITTI datasets at fairly fast speeds.
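To make the early-rejection idea concrete, here is a minimal sketch, assuming
hypothetical stage scorers of very different costs. It is not the paper's
CompACT boosting algorithm, only the cascade-evaluation pattern it produces:
the expensive CNN-like stage only ever sees patches that survive the cheap
stages.

```python
import numpy as np

# Minimal sketch of early rejection in a complexity-ordered cascade
# (illustrative only, not the paper's CompACT boosting algorithm).
# Each stage is (scoring_fn, reject_threshold); cheap stages come first,
# so the costly CNN-like stage runs only on the few surviving patches.

def cascade_score(patch, stages):
    total = 0.0
    for score_fn, threshold in stages:
        total += score_fn(patch)      # add this stage's partial score
        if total < threshold:         # early reject: costlier stages never run
            return None
    return total

# Hypothetical stand-in scorers with very different costs.
cheap_edges = lambda p: float(np.abs(np.diff(p)).mean())   # nearly free
mid_texture = lambda p: float(p.std())                     # moderate cost
deep_cnn    = lambda p: float(np.tanh(p.mean()))           # stands in for a CNN

stages = [(cheap_edges, 0.1), (mid_texture, 0.2), (deep_cnn, 0.3)]
patch = np.random.rand(32 * 32)
print(cascade_score(patch, stages))   # None if rejected at a cheap stage
```

In the paper's terms, learning picks the stage order and thresholds by
optimizing a Lagrangian risk that trades accuracy against expected feature
cost; the hand-set thresholds above are placeholders for that optimization.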
From Facial Parts Responses to Face Detection: A Deep Learning Approach
In this paper, we propose a novel deep convolutional network (DCN) that
achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically,
our method achieves a high recall rate of 90.99% on the challenging FDDB
benchmark, outperforming the state-of-the-art method by a large margin of
2.91%. Importantly, we consider finding faces from a new perspective through
scoring facial parts responses by their spatial structure and arrangement. The
scoring mechanism is carefully formulated considering challenging cases where
faces are only partially visible. This consideration allows our network to
detect faces under severe occlusion and unconstrained pose variation, which are
the main difficulties and bottlenecks of most existing face detection approaches.
We show that despite the use of a DCN, our network can achieve practical
runtime speed.
Comment: To appear in ICCV 2015
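As a rough illustration of scoring part responses by spatial arrangement (the
part bands and pooling rule below are our assumptions, not the paper's exact
formulation), one can pool each part's response map over the sub-region of a
candidate window where that part is expected to lie; an occluded part then
merely lowers the score rather than vetoing the window.

```python
import numpy as np

# Hypothetical vertical band of each facial part within a face window,
# given as (top, bottom) fractions of the window height.
PART_BANDS = {"hair": (0.0, 0.3), "eyes": (0.25, 0.5),
              "nose": (0.4, 0.7), "mouth": (0.6, 0.9)}

def face_score(part_maps, box):
    """part_maps: dict name -> HxW response map; box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    h = y1 - y0
    score = 0.0
    for name, (top, bot) in PART_BANDS.items():
        band = part_maps[name][y0 + int(top * h): y0 + int(bot * h), x0:x1]
        score += band.max() if band.size else 0.0  # strongest response in band
    return score / len(PART_BANDS)                 # occluded parts just dilute

H, W = 100, 100
maps = {k: np.random.rand(H, W) for k in PART_BANDS}
print(face_score(maps, (10, 70, 20, 70)))
```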
Object-Oriented Dynamics Learning through Multi-Level Abstraction
Object-based approaches for learning action-conditioned dynamics have
demonstrated promise for generalization and interpretability. However, existing
approaches suffer from structural limitations and optimization difficulties for
common environments with multiple dynamic objects. In this paper, we present a
novel self-supervised learning framework, called Multi-level Abstraction
Object-oriented Predictor (MAOP), which employs a three-level learning
architecture that enables efficient object-based dynamics learning from raw
visual observations. We also design a spatial-temporal relational reasoning
mechanism for MAOP to support instance-level dynamics learning and handle
partial observability. Our results show that MAOP significantly outperforms
previous methods in terms of sample efficiency and generalization to novel
environments when learning environment models. We also demonstrate that the learned
dynamics models enable efficient planning in unseen environments, comparable to
true environment models. In addition, MAOP learns semantically and visually
interpretable disentangled representations.
Comment: Accepted to the Thirty-Fourth AAAI Conference on Artificial
Intelligence (AAAI), 2020
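The following schematic sketch illustrates object-based, action-conditioned
dynamics with a simple relational term. The linear maps are stand-ins for
MAOP's learned, multi-level predictors; all names and shapes here are
hypothetical.

```python
import numpy as np

# Schematic object-based dynamics step (illustrative; MAOP's three-level
# architecture is learned, not hand-coded). Each object's next state is
# predicted from its own state, the action, and a relational term
# aggregated over the other objects.

def relational_step(objects, action, W_self, W_rel, W_act):
    """objects: (N, D) object states; action: (A,) one-hot action."""
    nxt = np.empty_like(objects)
    for i, obj in enumerate(objects):
        others = np.delete(objects, i, axis=0)
        relation = others.mean(axis=0) if len(others) else np.zeros_like(obj)
        # Linear stand-ins for the learned per-object and relational predictors.
        nxt[i] = obj + W_self @ obj + W_rel @ relation + W_act @ action
    return nxt

N, D, A = 3, 4, 2
rng = np.random.default_rng(0)
objs = rng.normal(size=(N, D))
step = relational_step(objs, np.array([1.0, 0.0]),
                       rng.normal(size=(D, D)) * 0.1,
                       rng.normal(size=(D, D)) * 0.1,
                       rng.normal(size=(D, A)) * 0.1)
print(step.shape)   # (3, 4): next-state prediction per object
```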
Deep Object-Centric Representations for Generalizable Robot Learning
Robotic manipulation in complex open-world scenarios requires both reliable
physical manipulation skills and effective and generalizable perception. In
this paper, we propose a method where general purpose pretrained visual models
serve as an object-centric prior for the perception system of a learned policy.
We devise an object-level attentional mechanism that can be used to determine
relevant objects from a few trajectories or demonstrations, and then
immediately incorporate those objects into a learned policy. A task-independent
meta-attention locates possible objects in the scene, and a task-specific
attention identifies which objects are predictive of the trajectories. The
scope of the task-specific attention is easily adjusted by showing
demonstrations with distractor objects or with diverse relevant objects. Our
results indicate that this approach exhibits good generalization across object
instances from very few samples, and can be used to learn a variety of
manipulation tasks with reinforcement learning.
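A toy sketch of the task-specific attention idea, under the assumption that
relevance can be scored by how well an object's motion predicts the
demonstrated end-effector trajectory; the correlation test below is our
simplification, not the paper's learned attention mechanism.

```python
import numpy as np

# Illustrative relevance scoring: a task-independent detector is assumed to
# have produced candidate object tracks; the task-specific score keeps the
# objects whose displacement pattern best matches the effector's.

def task_relevance(object_tracks, effector_track):
    """object_tracks: (N, T, 2) candidate positions; effector_track: (T, 2)."""
    scores = []
    d_eff = np.diff(effector_track, axis=0).ravel()
    for track in object_tracks:
        d_obj = np.diff(track, axis=0).ravel()
        denom = np.linalg.norm(d_obj) * np.linalg.norm(d_eff) + 1e-8
        scores.append(float(d_obj @ d_eff / denom))   # cosine similarity
    return np.array(scores)

rng = np.random.default_rng(1)
eff = np.cumsum(rng.normal(size=(20, 2)), axis=0)
tracks = np.stack([eff + rng.normal(scale=0.1, size=eff.shape),   # relevant
                   np.cumsum(rng.normal(size=(20, 2)), axis=0)])  # distractor
print(task_relevance(tracks, eff))   # relevant object scores near 1
```

Showing demonstrations with distractors, as the abstract describes, would
widen or narrow which tracks score highly under such a criterion.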
Segmentation-by-Detection: A Cascade Network for Volumetric Medical Image Segmentation
We propose an attention mechanism for 3D medical image segmentation. The
method, named segmentation-by-detection, is a cascade of a detection module
followed by a segmentation module. The detection module enables a region of
interest to come to attention and produces a set of object region candidates,
which are then used as an attention model. Rather than dealing with the
entire volume, the segmentation module distills the information from the
potential region. This scheme is an efficient solution for volumetric data as
it reduces the influence of the surrounding noise, which is especially
important for medical data with a low signal-to-noise ratio. Experimental
results on 3D ultrasound data of the femoral head show the superiority of the
proposed method when compared with a standard fully convolutional network such
as the U-Net.
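The cascade itself is easy to sketch. In the toy version below, the detector
and segmenter are trivial threshold-based stand-ins for the paper's learned
modules, but the control flow is the point: detect a region of interest, then
segment only inside it.

```python
import numpy as np

# Toy segmentation-by-detection cascade (illustrative stand-ins only).
# The detector proposes a region of interest; the segmenter runs solely on
# that crop, so noise in the rest of the volume never reaches it.

def detect_roi(volume, threshold=0.8):
    """Stand-in detector: bounding box of voxels above a threshold."""
    idx = np.argwhere(volume > threshold)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def segment(crop):
    """Stand-in segmenter: simple intensity threshold inside the ROI."""
    return crop > crop.mean()

volume = np.random.rand(64, 64, 64)
roi = detect_roi(volume)
mask = np.zeros(volume.shape, dtype=bool)
mask[roi] = segment(volume[roi])   # segmentation confined to the ROI
print(mask.sum(), "voxels labeled inside the detected region")
```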
Deep Interactive Region Segmentation and Captioning
With recent innovations in dense image captioning, it is now possible to
describe every object in the scene with a caption, with objects localized by
bounding boxes. However, interpreting such output is not trivial because many
of the bounding boxes overlap. Furthermore, current captioning frameworks do
not let the user apply personal preferences to exclude areas that are not of
interest. In this paper, we propose a novel hybrid deep
learning architecture for interactive region segmentation and captioning where
the user is able to specify an arbitrary region of the image that should be
processed. To this end, a dedicated Fully Convolutional Network (FCN) named
Lyncean FCN (LFCN) is trained using our special training data to isolate the
User Intention Region (UIR) as the output of an efficient segmentation. In
parallel, a dense image captioning model is utilized to provide a wide variety
of captions for that region. Then, the UIR will be explained with the caption
of the best-matching bounding box. To the best of our knowledge, this is the first
work that provides such a comprehensive output. Our experiments show the
superiority of the proposed approach over state-of-the-art interactive
segmentation methods on several well-known datasets. In addition, replacement
of the bounding boxes with the result of the interactive segmentation leads to
a better understanding of the dense image captioning output, as well as
improved object detection accuracy in terms of Intersection over Union (IoU).
Comment: 17 pages, 9 figures
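The matching step suggested by the abstract, pairing the UIR with the caption
of the best-matching box, can be sketched as an IoU argmax; the masks, boxes,
and captions below are hypothetical.

```python
import numpy as np

# Illustrative UIR-to-caption matching: compare the user intention region
# (UIR) mask against each dense-caption bounding box by IoU and return the
# caption of the best-matching box.

def box_iou_with_mask(mask, box):
    """mask: HxW boolean UIR; box: (y0, x0, y1, x1) caption box."""
    y0, x0, y1, x1 = box
    box_mask = np.zeros_like(mask)
    box_mask[y0:y1, x0:x1] = True
    inter = np.logical_and(mask, box_mask).sum()
    union = np.logical_or(mask, box_mask).sum()
    return inter / union if union else 0.0

def best_caption(mask, captioned_boxes):
    """captioned_boxes: list of (box, caption) from a dense captioner."""
    return max(captioned_boxes, key=lambda bc: box_iou_with_mask(mask, bc[0]))[1]

uir = np.zeros((100, 100), dtype=bool)
uir[30:60, 40:80] = True                        # hypothetical UIR mask
boxes = [((0, 0, 40, 40), "a wooden table"),
         ((28, 38, 62, 82), "a sleeping cat")]  # hypothetical captioned boxes
print(best_caption(uir, boxes))                 # -> "a sleeping cat"
```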