Language comprehension warps the mirror neuron system
abstract: Is the mirror neuron system (MNS) used in language understanding? According to embodied accounts of language comprehension, understanding sentences that describe actions recruits neural mechanisms of action control, including the MNS. Consequently, repeatedly comprehending sentences describing similar actions should induce adaptation of the MNS, thereby warping its use in other cognitive processes such as action recognition and prediction. To test this prediction, participants read blocks of sentences in which each sentence described the transfer of objects in a direction away from or toward the reader. Following each block, adaptation was measured by having participants predict the end-point of videotaped actions. The adapting sentences disrupted prediction of actions in the same direction, but (a) only for videos of biological motion, and (b) only when the effector implied by the language (e.g., the hand) matched the videos. These findings are signatures of the MNS.
View the article as published at http://journal.frontiersin.org/article/10.3389/fnhum.2013.00870/ful
Unsupervised Learning of Object Landmarks through Conditional Image Generation
We propose a method for learning landmark detectors for visual objects (such
as the eyes and the nose in a face) without any manual supervision. We cast
this as the problem of generating images that combine the appearance of the
object as seen in a first example image with the geometry of the object as seen
in a second example image, where the two examples differ by a viewpoint change
and/or an object deformation. In order to factorize appearance and geometry, we
introduce a tight bottleneck in the geometry-extraction process that selects
and distils geometry-related features. Compared to standard image generation
problems, which often use generative adversarial networks, our generation task
is conditioned on both appearance and geometry and thus is significantly less
ambiguous, to the point that adopting a simple perceptual loss formulation is
sufficient. We demonstrate that our approach can learn object landmarks from
synthetic image deformations or videos, all without manual supervision, while
outperforming state-of-the-art unsupervised landmark detectors. We further show
that our method is applicable to a large variety of datasets - faces, people,
3D objects, and digits - without any modifications.
Comment: In NeurIPS 2018. Project page: http://www.robots.ox.ac.uk/~vgg/research/unsupervised_landmarks
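The abstract does not spell out how the geometry bottleneck is implemented; a common realization in this line of work is to predict K heatmaps, collapse each to a 2-D coordinate with a soft-argmax, and re-render the coordinates as Gaussian maps for the generator, so that only landmark positions (and no appearance information) pass through. Below is a minimal NumPy sketch of such a bottleneck, under that assumption - the function names, shapes, and sigma value are illustrative, not taken from the paper:

```python
import numpy as np

def softmax2d(h):
    """Softmax over a single 2-D heatmap."""
    e = np.exp(h - h.max())
    return e / e.sum()

def soft_argmax(heatmaps):
    """Collapse (K, H, W) heatmaps to (K, 2) normalized (y, x) coordinates.

    This is the 'tight bottleneck': each landmark is reduced to a single
    2-D expected position under the heatmap's softmax distribution.
    """
    K, H, W = heatmaps.shape
    ys = np.linspace(-1, 1, H)
    xs = np.linspace(-1, 1, W)
    coords = np.zeros((K, 2))
    for k in range(K):
        p = softmax2d(heatmaps[k])
        coords[k, 0] = (p.sum(axis=1) * ys).sum()  # expected y
        coords[k, 1] = (p.sum(axis=0) * xs).sum()  # expected x
    return coords

def coords_to_maps(coords, H, W, sigma=0.1):
    """Re-render (K, 2) coordinates as (K, H, W) isotropic Gaussian maps
    that a decoder could consume alongside appearance features."""
    ys = np.linspace(-1, 1, H)[:, None]
    xs = np.linspace(-1, 1, W)[None, :]
    return np.stack([
        np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2))
        for y, x in coords
    ])

# Toy usage: three synthetic heatmaps, each with one sharp peak.
h = np.zeros((3, 16, 16))
for k in range(3):
    h[k, 4 * k + 2, 4 * k + 2] = 8.0
coords = soft_argmax(h)          # (3, 2) landmark positions
maps = coords_to_maps(coords, 16, 16)  # (3, 16, 16) geometry maps
```

In a trained model these heatmaps would come from a convolutional geometry encoder, and the reconstruction would be scored with the perceptual loss the abstract mentions; the sketch only shows why the bottleneck forces the geometry branch to carry positions rather than appearance.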