Children, Humanoid Robots and Caregivers
This paper presents developmental learning on a humanoid robot from human-robot interactions. We consider in particular teaching humanoids as children during the child's Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child's dependence on her mother for learning while becoming aware of her own individuality, and by self-exploration of her physical surroundings. We propose a learning framework for a humanoid robot inspired by such cognitive development.
Relation Networks for Object Detection
Although it has long been believed that modeling relations between
objects would help object recognition, there has been no evidence that the
idea works in the deep learning era. All state-of-the-art object detection
systems still rely on recognizing object instances individually, without
exploiting their relations during learning.
This work proposes an object relation module. It processes a set of objects
simultaneously through interaction between their appearance feature and
geometry, thus allowing modeling of their relations. It is lightweight and
in-place. It does not require additional supervision and is easy to embed in
existing networks. It is shown effective on improving object recognition and
duplicate removal steps in the modern object detection pipeline. It verifies
the efficacy of modeling object relations in CNN based detection. It gives rise
to the first fully end-to-end object detector.
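As a rough illustration of the relation-module idea described above — updating each object's feature by attending over all other objects with weights combining appearance similarity and geometry — the following sketch assumes shapes, a scaled-dot-product affinity, and a distance-based geometric prior that are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def relation_module(appearance, boxes):
    """Sketch of an object relation module (illustrative, not the paper's
    implementation): each object's feature is updated by attending over
    all objects, with weights combining appearance affinity and geometry.

    appearance: (n, d) per-object feature vectors
    boxes:      (n, 4) boxes as [x, y, w, h]
    """
    n, d = appearance.shape
    # Appearance affinity: scaled dot product between object features.
    logits = appearance @ appearance.T / np.sqrt(d)
    # Geometric prior (assumed form): down-weight pairs of distant boxes.
    centers = boxes[:, :2] + boxes[:, 2:] / 2.0
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    geo = -np.log1p(dist)  # closer boxes -> larger log-prior
    # Softmax over the combined logits.
    w = logits + geo
    w = np.exp(w - w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Relation feature is added residually, so the module is "in-place":
    # output has the same shape and needs no extra supervision.
    return appearance + w @ appearance
```

Because the output shape matches the input, such a module can be dropped between existing layers of a detector without changing the surrounding architecture.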
Active Object Localization in Visual Situations
We describe a method for performing active localization of objects in
instances of visual situations. A visual situation is an abstract
concept---e.g., "a boxing match", "a birthday party", "walking the dog",
"waiting for a bus"---whose image instantiations are linked more by their
common spatial and semantic structure than by low-level visual similarity. Our
system combines given and learned knowledge of the structure of a particular
situation, and adapts that knowledge to a new situation instance as it actively
searches for objects. More specifically, the system learns a set of probability
distributions describing spatial and other relationships among relevant
objects. The system uses those distributions to iteratively sample object
proposals on a test image, but also continually uses information from those
object proposals to adaptively modify the distributions based on what the
system has detected. We test our approach's ability to efficiently localize
objects, using a situation-specific image dataset created by our group. We
compare the results with several baselines and variations on our method, and
demonstrate the strong benefit of using situation knowledge and active
context-driven localization. Finally, we contrast our method with several other
approaches that use context as well as active search for object localization in
images.
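The sample-then-adapt loop described above — draw object proposals from a learned spatial distribution, then update that distribution with what has been detected — can be sketched in one dimension. The detector score function, the Gaussian prior, and the update rule below are illustrative assumptions, not the authors' model:

```python
import random

def active_localization(score_fn, prior_mean, prior_std, steps=50, lr=0.3):
    """Sketch of context-driven active search (illustrative assumptions):
    sample proposal locations from a spatial distribution, then shift the
    distribution toward high-scoring proposals as evidence accumulates.
    """
    mean, std = prior_mean, prior_std
    best, best_score = None, float("-inf")
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(steps):
        x = rng.gauss(mean, std)   # sample a proposal location
        s = score_fn(x)            # detector score at that location
        if s > best_score:
            best, best_score = x, s
        if s > 0:
            # Adapt: move the sampling distribution toward good evidence
            # and sharpen it as confidence grows.
            mean += lr * (x - mean)
            std = max(0.1, std * 0.95)
    return best, best_score
```

The point of the adaptation step is that later proposals are spent near regions the situation model already considers promising, rather than sampled uniformly.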
Some Epistemic Roles for Curiosity
I start with a critical discussion of some attempts to ground epistemic normativity in curiosity. Then I develop three positive proposals. The first of these proposals is more or less purely philosophical; the second two reside at the interdisciplinary borderline between philosophy and psychology. The proposals are independent and rooted in different literatures. Readers uninterested in the first proposal (and the critical discussion preceding it) may nonetheless be interested in the second two proposals, and vice versa.
The proposals are as follows. First I argue that, among several ways in which the notion of curiosity might be used to delineate significant truths from trivial ones, a particular way is the most promising. Second, I argue that curiosity has some underappreciated epistemic roles involving memory. Third, I argue that curiosity has some underappreciated epistemic roles involving coherence
Augmented Reality Meets Computer Vision : Efficient Data Generation for Urban Driving Scenes
The success of deep learning in computer vision is based on availability of
large annotated datasets. To lower the need for hand labeled images, virtually
rendered 3D worlds have recently gained popularity. Creating realistic 3D
content is challenging on its own and requires significant human effort. In
this work, we propose an alternative paradigm which combines real and synthetic
data for learning semantic instance segmentation and object detection models.
Exploiting the fact that not all aspects of the scene are equally important for
this task, we propose to augment real-world imagery with virtual objects of the
target category. Capturing real-world images at large scale is easy and cheap,
and directly provides real background appearances without the need for creating
complex 3D models of the environment. We present an efficient procedure to
augment real images with virtual objects. This allows us to create realistic
composite images which exhibit both realistic background appearance and a large
number of complex object arrangements. In contrast to modeling complete 3D
environments, our augmentation approach requires only a few user interactions
in combination with 3D shapes of the target object. Through extensive
experimentation, we determine the right set of parameters to produce augmented
data which can maximally enhance the performance of instance segmentation
models. Further, we demonstrate the utility of our approach on training
standard deep models for semantic instance segmentation and object detection of
cars in outdoor driving scenes. We test the models trained on our augmented
data on the KITTI 2015 dataset, which we have annotated with pixel-accurate
ground truth, and on Cityscapes dataset. Our experiments demonstrate that
models trained on augmented imagery generalize better than those trained on
synthetic data or on a limited amount of annotated real data.
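The core augmentation step above — placing a rendered virtual object onto a real background image — reduces, at its simplest, to alpha compositing plus recording the object's instance mask. The sketch below is a plain compositing illustration under assumed array layouts, not the paper's rendering pipeline:

```python
import numpy as np

def augment_with_object(real_img, obj_rgba, top, left):
    """Sketch of augmenting a real image with a virtual object (illustrative):
    alpha-blend an RGBA object crop onto the background at (top, left) and
    return the composite plus its instance mask for supervision.

    real_img: (H, W, 3) uint8 background photo
    obj_rgba: (h, w, 4) uint8 rendered object with alpha channel
    """
    out = real_img.astype(np.float32).copy()
    h, w = obj_rgba.shape[:2]
    alpha = obj_rgba[..., 3:4].astype(np.float32) / 255.0
    region = out[top:top + h, left:left + w]
    # Standard alpha compositing: object over real background.
    out[top:top + h, left:left + w] = (
        alpha * obj_rgba[..., :3] + (1.0 - alpha) * region
    )
    # The paste location directly yields pixel-accurate instance labels.
    mask = np.zeros(real_img.shape[:2], dtype=bool)
    mask[top:top + h, left:left + w] = alpha[..., 0] > 0.5
    return out.astype(np.uint8), mask
```

Because the background is a real photograph, only the inserted object needs a 3D model, which is what makes the approach cheaper than modeling complete 3D environments.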
Contextual Action Recognition with R*CNN
There are multiple cues in an image which reveal what action a person is
performing. For example, a jogger has a pose that is characteristic for
jogging, but the scene (e.g. road, trail) and the presence of other joggers can
be an additional source of information. In this work, we exploit the simple
observation that actions are accompanied by contextual cues to build a strong
action recognition system. We adapt RCNN to use more than one region for
classification while still maintaining the ability to localize the action. We
call our system R*CNN. The action-specific models and the feature maps are
trained jointly, allowing for action specific representations to emerge. R*CNN
achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other
approaches in the field by a significant margin. Last, we show that R*CNN is
not limited to action recognition. In particular, R*CNN can also be used to
tackle fine-grained tasks such as attribute classification. We validate this
claim by reporting state-of-the-art performance on the Berkeley Attributes of
People dataset.
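The multi-region idea above — score an action from the primary (person) region plus the single most informative secondary (context) region, chosen by a max over candidates — can be sketched as a simple scoring rule. The linear feature scoring below is an illustrative assumption standing in for the CNN features and jointly trained weights:

```python
import numpy as np

def rstar_score(primary_feat, secondary_feats, w_primary, w_secondary):
    """Sketch of the R*CNN scoring idea (illustrative, linear stand-in):
    the action score combines the primary region with the best-scoring
    secondary context region, selected by a max over candidates.
    """
    s_primary = primary_feat @ w_primary
    # Max over candidate context regions picks the most informative cue
    # (e.g. the road for "jogging") without needing it to be annotated.
    s_secondary = max(f @ w_secondary for f in secondary_feats)
    return s_primary + s_secondary
```

The max over secondary regions is what lets context be used without supervision on which region carries the cue; training discovers it.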
Classifying types of gesture and inferring intent
In order to infer intent from gesture, a rudimentary classification of gestures into five main types is introduced. The classification is intended as a basis for incorporating the understanding of gesture into human-robot interaction (HRI). Some requirements for the operational classification of gesture by a robot interacting with humans are also suggested.