
    A database of whole-body action videos for the study of action, emotion, and untrustworthiness

    We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions—walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting—while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. The actions conveying the four emotions and untrustworthiness were filmed multiple times, with the actor conveying the traits at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. To validate the traits conveyed by each action, we asked participants to rate the degree to which each action in the two-dimensional videos conveyed the trait that the actor portrayed. To make the database a useful set of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
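    Since each video name encodes the actor's gender, the action, the trait, and the rated intensity, a user of the database would typically parse names into structured records. The underscore-separated scheme below is purely hypothetical, for illustration; the published naming convention may differ.

    ```python
    from dataclasses import dataclass

    @dataclass
    class VideoInfo:
        gender: str       # e.g. "F" or "M" (assumed coding)
        action: str       # e.g. "walk", "jump" (assumed labels)
        trait: str        # e.g. "anger", "neutral"
        intensity: float  # mean perceived-intensity rating

    def parse_name(name: str) -> VideoInfo:
        # Assumed scheme: "<gender>_<action>_<trait>_<intensity>.<ext>"
        stem = name.rsplit(".", 1)[0]          # drop the file extension
        gender, action, trait, intensity = stem.split("_")
        return VideoInfo(gender, action, trait, float(intensity))

    info = parse_name("F_walk_anger_3.2.mp4")  # hypothetical file name
    ```

    A parser like this makes it easy to filter the stimulus set, e.g. selecting only high-intensity anger clips for an experiment.
    
    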

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than those of the listener. To better assess the motor theory in light of this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and on viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sounds produced by the speaker to phonemes in the listener's native-language repertoire. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture, revisits claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
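    The core tenet described above—using word hypotheses to update probabilities linking accented sounds to native phonemes—can be sketched as a simple count-based Bayesian update. Everything below (the class, the phoneme labels, the smoothing scheme) is an illustrative assumption, not the paper's actual model.

    ```python
    import collections

    class AccentAdapter:
        """Sketch: maintain P(native phoneme | accented sound token) and
        sharpen it whenever a word hypothesis identifies the intended phoneme."""

        def __init__(self, phonemes, smoothing=1.0):
            self.phonemes = phonemes
            self.smoothing = smoothing
            # counts[sound][phoneme]: how often this sound stood for that phoneme
            self.counts = collections.defaultdict(
                lambda: collections.defaultdict(float))

        def posterior(self, sound):
            # Laplace-smoothed P(phoneme | sound)
            total = (sum(self.counts[sound].values())
                     + self.smoothing * len(self.phonemes))
            return {p: (self.counts[sound][p] + self.smoothing) / total
                    for p in self.phonemes}

        def update(self, sound, hypothesized_phoneme):
            # A recognized word tells us which native phoneme the sound stood for.
            self.counts[sound][hypothesized_phoneme] += 1.0

    adapter = AccentAdapter(phonemes=["th", "t", "d"])
    for _ in range(8):                    # repeated exposure: an accented
        adapter.update("t-like", "th")    # t-like sound actually means /th/
    post = adapter.posterior("t-like")    # /th/ now dominates
    ```

    As in the model, the updated mapping improves recognition of later words on average, and the scheme is agnostic about whether the "sound" representation is auditory or motor.
    
    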

    Activity in ventral premotor cortex is modulated by vision of own hand in action

    Parietal and premotor cortices of the macaque monkey contain distinct populations of neurons which, in addition to their motor discharge, are also activated by visual stimulation. Among these visuomotor neurons, a population of grasping neurons located in the anterior intraparietal area (AIP) shows discharge modulation when the monkey's own hand is visible during object grasping. Given the dense connections between AIP and inferior frontal regions, we aimed to investigate whether two hand-related frontal areas, ventral premotor area F5 and primary motor cortex (area F1), contain neurons with similar properties. Two macaques were involved in a grasping task executed in various light/dark conditions in which the to-be-grasped object was kept visible by a dim retro-illumination. Approximately 62% of F5 and 55% of F1 motor neurons showed light/dark modulations. To better isolate the effect of hand-related visual input, we introduced two further conditions characterized by kinematic features similar to the dark condition. The scene was briefly illuminated (i) during hand preshaping (pre-touch flash, PT-flash) and (ii) at hand-object contact (touch flash, T-flash). Approximately 48% of F5 and 44% of F1 motor neurons showed a flash-related modulation. Considering flash-modulated neurons in the two flash conditions, ∼40% from F5 and ∼52% from F1 showed stronger activity in PT- than T-flash (PT-flash-dominant), whereas ∼60% from F5 and ∼48% from F1 showed stronger activity in T- than PT-flash (T-flash-dominant). Furthermore, F5, but not F1, flash-dominant neurons were characterized by a higher peak and mean discharge in the preferred flash condition as compared to light and dark conditions. Still considering F5, the distribution of the time of peak discharge was similar in light and preferred flash conditions.
    This study shows that the frontal cortex contains neurons, previously classified as motor neurons, that are sensitive to the observation of meaningful phases of the monkey's own grasping action. We conclude by discussing the possible functional role of these populations.

    Modeling the Development of Goal-Specificity in Mirror Neurons

    Neurophysiological studies have shown that parietal mirror neurons encode not only actions but also the goal of these actions. Although some mirror neurons will fire whenever a certain action is perceived (goal-independently), most will only fire if the motion is perceived as part of an action with a specific goal. This result is important for the action-understanding hypothesis, as it provides a potential neurological basis for such a cognitive ability. It is also relevant for the design of artificial cognitive systems, in particular robotic systems that rely on computational models of the mirror system in their interaction with other agents. Yet, to date, no computational model has explicitly addressed the mechanisms that give rise to both goal-specific and goal-independent parietal mirror neurons. In the present paper, we describe a computational model based on a self-organizing map, which receives artificial inputs representing information about both the observed or executed actions and the context in which they were executed. We show that the map develops a biologically plausible organization in which goal-specific mirror neurons emerge. We further show that the fundamental cause for both the appearance and the number of goal-specific neurons can be found in geometric relationships between the different inputs to the map. The results are important to the action-understanding hypothesis as they provide a mechanism for the emergence of goal-specific parietal mirror neurons and lead to a number of predictions: (1) learning of new goals may mostly reassign existing goal-specific neurons rather than recruit new ones; (2) input differences between executed and observed actions can explain observed corresponding differences in the number of goal-specific neurons; and (3) the percentage of goal-specific neurons may differ between motion primitives.
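    The mechanism described above—a self-organizing map receiving concatenated action and context inputs, in which goal-specific units emerge—can be sketched in a few lines. The encodings, dimensions, and training schedule below are illustrative assumptions, not the paper's actual parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical input coding: an action code (grasp vs. reach) concatenated
    # with a context/goal code (eat vs. place).
    def make_input(action, goal):
        actions = {"grasp": [1.0, 0.0], "reach": [0.0, 1.0]}
        goals = {"eat": [1.0, 0.0], "place": [0.0, 1.0]}
        return np.array(actions[action] + goals[goal])

    samples = [("grasp", "eat"), ("grasp", "place"),
               ("reach", "eat"), ("reach", "place")]

    def quantization_error(weights):
        # mean distance from each input to its best-matching unit (BMU)
        return np.mean([np.min(np.linalg.norm(weights - make_input(a, g), axis=1))
                        for a, g in samples])

    # Minimal 1-D self-organizing map
    n_units, dim, n_epochs = 10, 4, 200
    weights = rng.random((n_units, dim))
    err_before = quantization_error(weights)

    for epoch in range(n_epochs):
        lr = 0.5 * (1.0 - epoch / n_epochs)            # decaying learning rate
        sigma = 2.0 * (1.0 - epoch / n_epochs) + 0.1   # shrinking neighborhood
        for action, goal in samples:
            x = make_input(action, goal)
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            idx = np.arange(n_units)
            h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)

    err_after = quantization_error(weights)

    # "Goal-specific" units: the same action with different goals should map
    # to different best-matching units once the map has organized.
    bmu_eat = int(np.argmin(np.linalg.norm(weights - make_input("grasp", "eat"), axis=1)))
    bmu_place = int(np.argmin(np.linalg.norm(weights - make_input("grasp", "place"), axis=1)))
    ```

    The geometric point of the paper shows up directly here: whether units specialize by goal depends on how far apart the combined action+context inputs lie in the map's input space.
    
    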

    The Use of Phonetic Motor Invariants Can Improve Automatic Phoneme Discrimination

    We investigate the use of phonetic motor invariants (MIs), that is, recurring kinematic patterns of the human phonetic articulators, to improve automatic phoneme discrimination. Using a multi-subject database of synchronized speech and lips/tongue trajectories, we first identify MIs commonly associated with bilabial and dental consonants and use them to simultaneously segment speech and motor signals. We then build a simple neural-network-based regression schema (called the Audio-Motor Map, AMM) mapping audio features of these segments to the corresponding MIs.
    Extensive experimental results show that (a) a small set of features extracted from the MIs, as originally gathered from articulatory sensors, is dramatically more effective than a large, state-of-the-art set of audio features in automatically discriminating bilabials from dentals; and (b) the same features, extracted from AMM-reconstructed MIs, are as effective as or better than the audio features when testing across speakers and coarticulating phonemes, and dramatically better as noise is added to the speech signal. These results seem to support some of the claims of the motor theory of speech perception and add experimental evidence of the actual usefulness of MIs in the more general framework of automatic speech recognition.
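    The reconstruct-then-classify idea behind the Audio-Motor Map—learn a regression from audio features to motor invariants, then use the reconstructed invariants downstream—can be sketched with synthetic data. The paper uses a neural network; closed-form ridge regression stands in here, and all data and dimensions are fabricated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: audio feature vectors and the motor invariants (MIs)
    # they are assumed to be linearly related to, plus sensor noise.
    n, d_audio, d_motor = 200, 12, 3
    true_map = rng.normal(size=(d_audio, d_motor))
    audio = rng.normal(size=(n, d_audio))
    motor = audio @ true_map + 0.05 * rng.normal(size=(n, d_motor))

    # Fit the audio -> motor map on a training split (ridge, closed form):
    # W = (A^T A + lam I)^{-1} A^T M
    lam = 1e-3
    A_tr, M_tr = audio[:150], motor[:150]
    W = np.linalg.solve(A_tr.T @ A_tr + lam * np.eye(d_audio), A_tr.T @ M_tr)

    # Reconstruct motor invariants for held-out audio, as the AMM does at
    # test time when no articulatory sensors are available.
    M_hat = audio[150:] @ W
    err = float(np.mean((M_hat - motor[150:]) ** 2))
    ```

    In the paper's pipeline, the reconstructed MIs (here `M_hat`) would then feed the bilabial-vs-dental classifier in place of the raw articulatory measurements.
    
    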

    The cognitive neuroscience of prehension: recent developments

    Prehension, the capacity to reach and grasp, is the key behavior that allows humans to change their environment. It continues to serve as a remarkable experimental test case for probing the cognitive architecture of goal-oriented action. This review focuses on recent experimental evidence that enhances or modifies how we might conceptualize the neural substrates of prehension. Emphasis is placed on studies that consider how precision grasps are selected and transformed into motor commands. Then, the mechanisms that extract action-relevant information from vision and touch are considered. These include consideration of how parallel perceptual networks within parietal cortex, along with the ventral stream, are connected and share information to achieve common motor goals. On-line control of grasping action is discussed within a state estimation framework. The review ends with a consideration of how prehension fits within larger action repertoires that solve more complex goals, and the possible cortical architectures needed to organize these actions.
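    The state-estimation framework mentioned for on-line grasp control typically combines a forward-model prediction (from an efference copy of the motor command) with noisy, delayed sensory feedback. A scalar Kalman filter tracking grip aperture illustrates the idea; the dynamics and noise values are invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    dt, n_steps = 0.01, 100
    velocity = 5.0            # commanded aperture opening rate (illustrative)
    q, r = 1e-4, 0.25         # process and measurement noise variances (assumed)

    true_aperture, estimate, p = 0.0, 0.0, 1.0
    errors = []
    for _ in range(n_steps):
        # Plant: aperture follows the command with small motor noise.
        true_aperture += velocity * dt + rng.normal(scale=np.sqrt(q))

        # Predict step: forward model driven by the efference copy.
        estimate += velocity * dt
        p += q

        # Update step: correct with a noisy sensory measurement
        # (vision / proprioception).
        z = true_aperture + rng.normal(scale=np.sqrt(r))
        k = p / (p + r)                  # Kalman gain
        estimate += k * (z - estimate)
        p *= (1.0 - k)

        errors.append(abs(estimate - true_aperture))

    final_error = float(np.mean(errors[-20:]))  # settles well below sensor noise
    ```

    The design point is that neither pure prediction nor pure feedback suffices: the gain `k` continuously weighs the forward model against the senses, which is how the reviewed work frames rapid on-line grasp corrections.
    
    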

    Visual attention and action: How cueing, direct mapping, and social interactions drive orienting

    Despite considerable interest in both action perception and social attention over the last two decades, there has been surprisingly little investigation concerning how the manual actions of other humans orient visual attention. The present review draws together studies that have measured the orienting of attention following observation of another’s goal-directed action. Our review proposes that, in line with the literature on eye gaze, action is a particularly strong orienting cue for the visual system. However, we additionally suggest that action may orient visual attention via mechanisms that gaze direction does not (i.e., neural direct mapping and corepresentation). Finally, we review the implications of these gaze-independent mechanisms for the study of attention to action. We suggest that our understanding of attention to action may benefit from being studied in the context of joint action paradigms, where the role of higher-level action goals and social factors can be investigated.