
    The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common in gesture studies, given the field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often, kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
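    The pixel-change method compared in this abstract can be illustrated with a minimal sketch: differencing consecutive grayscale video frames yields a movement time series whose maximum approximates a kinematic (speed) peak. The function names and the synthetic clip below are illustrative assumptions for exposition, not the authors' actual pipeline.

    ```python
    import numpy as np

    def pixel_change_series(frames):
        """Mean absolute difference between consecutive grayscale frames.

        frames: array of shape (T, H, W). Returns a length T-1 series that
        serves as a coarse proxy for overall movement magnitude over time.
        """
        diffs = np.abs(np.diff(frames.astype(float), axis=0))
        return diffs.mean(axis=(1, 2))

    def kinematic_peak_time(series, fps):
        """Time (in seconds) of the frame transition with maximum change."""
        return int(np.argmax(series)) / fps

    # Synthetic demo (hypothetical data): a small image patch whose brightness
    # changes fastest around the 10th frame transition, mimicking the
    # velocity peak of a gesture stroke.
    T, H, W = 20, 64, 64
    speeds = np.exp(-0.5 * ((np.arange(1, T) - 10) / 3.0) ** 2)  # bell-shaped speed profile
    frames = np.zeros((T, H, W))
    for t in range(1, T):
        frames[t] = frames[t - 1]
        frames[t, :8, :8] += speeds[t - 1]  # per-frame change proportional to speed

    series = pixel_change_series(frames)
    peak_t = kinematic_peak_time(series, fps=25)  # peak lands at transition index 9
    ```

    In practice the frames would come from video (e.g., decoded with a vision library) and the series would be smoothed before peak detection, but the core idea, frame differencing as a cheap stand-in for tracked velocity, is captured above.
    
    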

    Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving.

    Toward a more embedded/extended perspective on the cognitive function of gestures

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that offers means to think with. We show that there is considerable overlap between the way the human cognitive system has been found to use its environment and how gestures are used during cognitive processes. Lastly, we provide several suggestions for how to investigate the embedded/extended perspective on the cognitive function of gestures.

    Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture, or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory, and we found that gesturing did not affect these judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings of the actions they describe, rather than by implicit motor routines.

    Co-thought gesturing supports more complex problem solving in subjects with lower visual working-memory capacity

    During silent problem solving, hand gestures arise that have no communicative intent. The role of such co-thought gestures in cognition has been understudied in cognitive research as compared to co-speech gestures. We investigated whether gesticulation during silent problem solving supported subsequent performance in a Tower of Hanoi problem-solving task, in relation to visual working-memory capacity and task complexity. Seventy-six participants were assigned to either an instructed gesture condition or a condition that allowed them to gesture, but without explicit instructions to do so. This resulted in th