How Do Gestures Influence Thinking and Speaking? The Gesture-for-Conceptualization Hypothesis.
Peer reviewed | Postprint
Developing student spatial ability with 3D software applications
This paper reports on the design of a library of software applications for the teaching and learning of spatial geometry and visual thinking. The core objective of these applications is the development of a set of dynamic microworlds that enable students to (i) construct, observe, and manipulate configurations in space, (ii) study different solids and relate them to their corresponding nets, and (iii) develop their visualization skills through the process of constructing dynamic visual images. Throughout the development of the software applications, the key elements of spatial ability and visualization (mental images, external representations, processes, and abilities of visualization) were carefully taken into consideration.
ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids
We introduce an unsupervised feature learning approach that embeds 3D shape
information into a single-view image representation. The main idea is a
self-supervised training objective that, given only a single 2D image, requires
all unseen views of the object to be predictable from learned features. We
implement this idea as an encoder-decoder convolutional neural network. The
network maps an input image of an unknown category and unknown viewpoint to a
latent space, from which a deconvolutional decoder can best "lift" the image to
its complete viewgrid showing the object from all viewing angles. Our
class-agnostic training procedure encourages the representation to capture
fundamental shape primitives and semantic regularities in a data-driven
manner, without manual semantic labels. Our results on two widely-used shape
datasets show 1) our approach successfully learns to perform "mental rotation"
even for objects unseen during training, and 2) the learned latent space is a
powerful representation for object recognition, outperforming several existing
unsupervised feature learning methods.
Comment: To appear at ECCV 2018
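The training objective described in this abstract can be sketched compactly. The following is a minimal toy illustration, not the paper's implementation: linear maps stand in for the convolutional encoder-decoder, and the dimensions (16x16 views, a 32-d latent code, 8 views per viewgrid) are illustrative assumptions. What it shows is the self-supervised signal itself: from the features of a single observed view, every view in the object's viewgrid must be predictable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration, not the paper's settings):
# a 16x16 grayscale input view, a 32-d latent code, and a viewgrid of
# V = 8 views of the same object.
H = W = 16
D = 32
V = 8

# Random linear "encoder" and "decoder" stand in for the paper's
# convolutional encoder-decoder; the point is the objective, not the net.
W_enc = rng.normal(0.0, 0.01, size=(H * W, D))
W_dec = rng.normal(0.0, 0.01, size=(D, V * H * W))

def encode(view):
    """Map a single 2D view to a latent shape code."""
    return view.reshape(-1) @ W_enc

def decode(z):
    """Lift a latent code to a full viewgrid of V views."""
    return (z @ W_dec).reshape(V, H, W)

def viewgrid_loss(view, viewgrid):
    """Mean squared error between the predicted and true viewgrids:
    all unseen views must be predictable from the single input view."""
    pred = decode(encode(view))
    return float(np.mean((pred - viewgrid) ** 2))

# One synthetic object: a true viewgrid and a single observed view from it.
true_grid = rng.normal(size=(V, H, W))
observed = true_grid[3]
loss = viewgrid_loss(observed, true_grid)
```

In the paper this loss is minimized over many objects with a deep encoder-decoder; minimizing it forces the latent code to carry 3D shape information, which is what makes it useful for downstream recognition.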
Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process
We thank Lucy Foulkes, Rachel Furness, Valentina Lee, and Zeshu Shao for their help with data collection; Paraskevi Argyriou for her help with reliability checks of gesture coding; and Agnieszka Konopka and Josje Praamstra for their help with proofreading this article.
Peer reviewed | Postprint
TEST: A Tropic, Embodied, and Situated Theory of Cognition
TEST is a novel taxonomy of knowledge representations based on three distinct, hierarchically organized representational features: Tropism, Embodiment, and Situatedness. Tropic representational features reflect constraints of the physical world on the agent’s ability to form, reactivate, and enrich embodied (i.e., resulting from the agent’s bodily constraints) conceptual representations embedded in situated contexts. The proposed hierarchy entails that representations can, in principle, have tropic features without necessarily having situated and/or embodied features. On the other hand, representations that are situated and/or embodied are likely to be simultaneously tropic. Hence, while we propose tropism as the most general term, embodiment and situatedness stand on a more equal footing within the hierarchy, such that the dominance of one component over the other depends on the distinction between offline storage and online generation, as well as on representation-specific properties.
"Mental Rotation" by Optimizing Transforming Distance
The human visual system is able to recognize objects despite transformations
that can drastically alter their appearance. To this end, much effort has been
devoted to the invariance properties of recognition systems. Invariance can be
engineered (e.g. convolutional nets), or learned from data explicitly (e.g.
temporal coherence) or implicitly (e.g. by data augmentation). One idea that
has not, to date, been explored is the integration of latent variables which
permit a search over a learned space of transformations. Motivated by evidence
that people mentally simulate transformations in space while comparing
examples, so-called "mental rotation", we propose a transforming distance.
Here, a trained relational model actively transforms pairs of examples so that
they are maximally similar in some feature space yet respect the learned
transformational constraints. We apply our method to nearest-neighbour problems
on the Toronto Face Database and NORB.
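The core idea of this abstract, a distance that searches over transformations before comparing, can be sketched as follows. This is a simplified illustration, not the paper's method: a fixed, discrete transformation family (90-degree rotations) and an identity feature map stand in for the learned relational model and learned feature space. The distance between two images is the smallest feature-space distance achievable by transforming one of them.

```python
import numpy as np

def features(img):
    """Stand-in feature map (flatten); the paper learns this space."""
    return img.reshape(-1)

def transforming_distance(x, y):
    """min over k of ||f(rot90(x, k)) - f(y)||, for k = 0..3."""
    return min(
        float(np.linalg.norm(features(np.rot90(x, k)) - features(y)))
        for k in range(4)
    )

def nearest_neighbour(query, gallery):
    """Index of the gallery item closest under the transforming distance."""
    dists = [transforming_distance(query, g) for g in gallery]
    return int(np.argmin(dists))

# Usage: a rotated copy of gallery[1] matches gallery[1] exactly, because
# the search over rotations can undo the transformation before comparing.
rng = np.random.default_rng(0)
gallery = [rng.normal(size=(8, 8)) for _ in range(3)]
query = np.rot90(gallery[1], 2)
idx = nearest_neighbour(query, gallery)
```

A plain Euclidean nearest-neighbour would treat the rotated query as far from its source image; the explicit search over the transformation family is what emulates "mental rotation" here.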
Action in cognition: the case of language
Empirical research has shown that the processing of words and sentences is accompanied by activation of the brain's motor system in language users. The degree of precision observed in this activation seems to be contingent upon (1) the meaning of a linguistic construction and (2) the depth with which readers process that construction. In addition, neurological evidence shows a correspondence between a disruption in the neural correlates of overt action and the disruption of semantic processing of language about action. These converging lines of evidence can be taken to support the hypotheses that motor processes (1) are recruited to understand language that focuses on actions and (2) contribute a unique element to conceptual representation. This article explores the role of this motor recruitment in language comprehension. It concludes that extant findings are consistent with the theorized existence of multimodal, embodied representations of the referents of words and the meaning carried by language. Further, an integrative conceptualization of “fault tolerant comprehension” is proposed.
Pointing as an Instrumental Gesture: Gaze Representation Through Indication
The research of the first author was supported by a Fulbright Visiting Scholar Fellowship and developed in 2012 during a research visit at the University of Memphis.
Peer reviewed | Publisher PDF