The Neurological Traces of Look-Alike Avatars
We designed an observational study in which participants (n = 17) were exposed
to pictures and look-alike avatar pictures of themselves, a familiar friend, or an
unfamiliar person. By measuring participants' brain activity with electroencephalography
(EEG), we found face-recognition event-related potentials (ERPs) in the visual cortex,
around 200–250 ms, to be prominent across the different familiarity levels. A less positive
component (P200) was found for self-recognized pictures than for pictures of others,
with similar effects for both real faces and look-alike avatars. A rapid adaptation in
the same component was found when comparing the neural processing of avatar faces
vs. real faces, as if avatars in general were assimilated as real face representations
over time. ERP results also showed that, in the case of the self-avatar, the P200
component correlated with more complex conscious encodings of self-representation:
the difference in P200 voltage between the self-avatar and the self-picture
was reduced in participants who felt the avatar looked like them. This study is put into
context within the literature on self-recognition and face recognition in the visual cortex.
Additionally, the implications of these results for look-alike avatars are discussed
for future virtual reality (VR) and neuroscience studies.
Enriching footsteps sounds in gait rehabilitation in chronic stroke patients: a pilot study
In the context of neurorehabilitation, sound is increasingly applied to facilitate sensorimotor learning. In this study, we aimed to test the potential value of auditory stimulation for improving gait in chronic stroke patients by inducing alterations in the frequency spectra of walking sounds via a sound system that selectively amplifies and equalizes the signal in order to produce distorted auditory feedback. Twenty-two patients with lower extremity paresis were exposed to real-time alterations of their footstep sounds while walking. Changes in body perception, emotion, and gait were quantified. Our results suggest that by altering footstep sounds, several gait parameters can be modified in terms of left–right foot asymmetry. We observed that augmenting low-frequency bands or amplifying the natural walking sounds led to a reduction in the asymmetry index of stance and stride times, whereas it inverted the asymmetry pattern in heel–ground exerted force. By contrast, augmenting high-frequency bands led to the opposite results. These gait changes might be related to updating of internal forward models, signaling the need for the motor system to adjust in order to reduce the perceived discrepancies between predicted and actual sensory feedback. Our findings may have the potential to enhance gait awareness in stroke patients and other clinical conditions, supporting gait rehabilitation.
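The asymmetry index of stance and stride times mentioned above is not defined in the abstract; a minimal sketch, assuming the conventional normalized left–right difference, could look like this (the exact formula used in the study may differ):

```python
def asymmetry_index(left, right):
    """Normalized left-right asymmetry: 0 means perfect symmetry.

    Assumes the common symmetry-index form (L - R) / (0.5 * (L + R));
    this is an illustrative convention, not the study's stated formula.
    """
    return (left - right) / (0.5 * (left + right))

# Hypothetical stance times (s) for the left and right foot
print(asymmetry_index(0.62, 0.58))  # small positive value: left stance longer
print(asymmetry_index(0.50, 0.50))  # 0.0: symmetric gait
```

With this convention, a reduction in the index after the low-frequency manipulation would mean the two sides' timing moved closer together.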
Violating body movement semantics: Neural signatures of self-generated and external-generated errors
How do we recognize ourselves as the agents of our actions? Do we use the same error-detection mechanisms to monitor self-generated vs. externally imposed actions? Using event-related brain potentials (ERPs), we identified two different error-monitoring loops involved in providing a coherent sense of the agency of our actions. In the first ERP experiment, participants were embodied in a virtual body (avatar) while performing an error-prone fast reaction time task. Crucially, in certain trials, participants were deceived regarding their own actions, i.e., the avatar's movement did not match the participant's movement. Self-generated real errors and false (avatar) errors showed very different ERP signatures with different processing latencies: while real errors showed a classical frontal-central error-related negativity (Ne/ERN), peaking 100 ms after error commission, false errors elicited a larger and delayed parietal negative component (at about 350–400 ms). The violation of the sense of agency elicited by false avatar errors showed a strong similarity to ERP signatures related to semantic or conceptual violations (N400 component). In a follow-up ERP control experiment, a subset of the same participants merely acted as observers of the avatar's correct and erroneous movements. This experimental situation did not elicit the N400 component associated with agency violation. Thus, the results show a clear neural dissociation between internal and external error-monitoring loops responsible for distinguishing our self-generated errors from those imposed externally, opening new avenues for the study of the mental processes underlying the integration of internal and sensory feedback information while being actors of our own actions.
Where, When and Why Brain Activation Differs for Bilinguals and Monolinguals during Picture Naming and Reading Aloud
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Logopenic and nonfluent variants of primary progressive aphasia are differentiated by acoustic measures of speech production
Differentiation of the logopenic (lvPPA) and nonfluent/agrammatic (nfvPPA) variants of primary progressive aphasia is important yet remains challenging, since it hinges on expert-based evaluation of speech and language production. In this study, acoustic measures of speech in conjunction with voxel-based morphometry were used to determine the success of the measures as an adjunct to diagnosis and to explore the neural basis of apraxia of speech in nfvPPA. Forty-one patients (21 lvPPA, 20 nfvPPA) were recruited from a consecutive sample with suspected frontotemporal dementia. Patients were diagnosed using the current gold standard of expert perceptual judgment, based on the presence/absence of particular speech features during speaking tasks. Seventeen healthy age-matched adults served as controls. MRI scans were available for 11 control and 37 PPA cases; 23 of the PPA cases underwent amyloid ligand PET imaging. Measures, corresponding to perceptual features of apraxia of speech, were periods of silence during reading and relative vowel duration and intensity in polysyllable word repetition. Discriminant function analyses revealed that a measure of relative vowel duration differentiated nfvPPA cases from both control and lvPPA cases (r² = 0.47), with 88% agreement with expert judgment of the presence of apraxia of speech in nfvPPA cases. VBM analysis showed that relative vowel duration covaried with grey matter intensity in areas critical for speech motor planning and programming: the precentral gyrus, supplementary motor area, and inferior frontal gyrus bilaterally, affected only in the nfvPPA group. This bilateral involvement of frontal speech networks in nfvPPA potentially affects access to compensatory mechanisms involving right hemisphere homologues. Measures of silences during reading also discriminated the PPA and control groups, but did not increase predictive accuracy.
Findings suggest that a measure of relative vowel duration from a polysyllable word repetition task may be sufficient for detecting most cases of apraxia of speech and for distinguishing between nfvPPA and lvPPA.
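The abstract does not spell out how relative vowel duration is computed; a common operationalization is the fraction of utterance time spent in vowels, taken from a forced alignment of the repetition. The sketch below assumes that convention, with a hypothetical segment list and a deliberately simplified vowel inventory:

```python
def relative_vowel_duration(segments):
    """Fraction of utterance time spent in vowel segments.

    `segments` is a list of (label, duration_s) tuples, e.g. from a
    forced aligner. The vowel set below is a simplified assumption;
    the study's exact measure and segment inventory may differ.
    """
    vowels = {"a", "e", "i", "o", "u"}
    total = sum(dur for _, dur in segments)
    vowel_time = sum(dur for label, dur in segments if label in vowels)
    return vowel_time / total

# Hypothetical alignment of one polysyllable repetition ("banana")
segs = [("b", 0.05), ("a", 0.10), ("n", 0.06), ("a", 0.12),
        ("n", 0.06), ("a", 0.11)]
print(relative_vowel_duration(segs))
```

In apraxia of speech, lengthened and equalized vowel durations would push this ratio up, which is what would let a single scalar feed a discriminant function.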
Infant Rule Learning: Advantage Language, or Advantage Speech?
Infants appear to learn abstract rule-like regularities (e.g., "la la da" follows an AAB pattern) more easily from speech than from a variety of other auditory and visual stimuli (Marcus et al., 2007). We test whether that facilitation reflects a specialization to learn from speech alone, or from modality-independent communicative stimuli more generally, by measuring 7.5-month-old infants' ability to learn abstract rules from sign language-like gestures. Whereas infants appear to easily learn many different rules from speech, we found that with sign-like stimuli, and under circumstances comparable to those of Marcus et al. (1999), hearing infants were able to learn an ABB rule, but not an AAB rule. This is consistent with the results of studies that demonstrate lower levels of infant rule learning from a variety of other non-speech stimuli, and we discuss implications for accounts of speech facilitation.
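The AAB/ABB rules tested here are identity patterns over three-item sequences. As a minimal sketch of what "following a rule" means formally (the function and its names are illustrative, not from the study):

```python
def follows_rule(triplet, rule):
    """Check whether a three-item sequence matches an abstract rule.

    `rule` is a pattern such as "AAB" or "ABB": identical letters in
    the pattern must map to identical items, and distinct letters must
    map to distinct items.
    """
    mapping = {}   # pattern letter -> concrete item
    used = set()   # items already bound to some letter
    for item, letter in zip(triplet, rule):
        if letter in mapping:
            if mapping[letter] != item:
                return False
        else:
            if item in used:
                return False
            mapping[letter] = item
            used.add(item)
    return True

print(follows_rule(["la", "la", "da"], "AAB"))  # True
print(follows_rule(["la", "da", "da"], "AAB"))  # False (it is ABB)
```

The infants' task is essentially to induce such a pattern from habituation items and then detect whether novel triplets satisfy it.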
High-definition tDCS of the temporo-parietal cortex enhances access to newly learned words
Learning associations between words and their referents is crucial for language learning in the developing and adult brain and for language re-learning after neurological injury. Non-invasive transcranial direct current stimulation (tDCS) to the posterior temporo-parietal cortex has been suggested to enhance this process. However, previous studies employed standard tDCS set-ups that induce diffuse current flow in the brain, preventing the attribution of stimulation effects to the target region. This study employed high-definition tDCS (HD-tDCS), which allowed the current flow to be constrained to the temporo-parietal cortex, to clarify its role in novel word learning. In a sham-controlled, double-blind, between-subjects design, 50 healthy adults learned associations between legal non-words and unfamiliar object pictures. Participants were stratified by baseline learning ability on a short version of the learning paradigm and pairwise randomized to active (20 min; N = 25) or sham (40 s; N = 25) HD-tDCS. Accuracy was comparable during the baseline and experimental phases in both HD-tDCS conditions. However, active HD-tDCS resulted in faster retrieval of correct word-picture pairs. Our findings corroborate the critical role of the temporo-parietal cortex in novel word learning, which has implications for current theories of language acquisition.
Direct effects of diazepam on emotional processing in healthy volunteers
RATIONALE: Pharmacological agents used in the treatment of anxiety have been reported to decrease threat-relevant processing in patients and healthy controls, suggesting a potentially relevant mechanism of action. However, the effects of the anxiolytic diazepam have typically been examined at sedative doses, which do not allow the direct actions on emotional processing to be fully separated from global effects of the drug on cognition and alertness. OBJECTIVES: The aim of this study was to investigate the effect of a lower, but still clinically effective, dose of diazepam on emotional processing in healthy volunteers. MATERIALS AND METHODS: Twenty-four participants were randomised to receive a single dose of diazepam (5 mg) or placebo. Sixty minutes later, participants completed a battery of psychological tests, including measures of non-emotional cognitive performance (reaction time and sustained attention) and emotional processing (affective modulation of the startle reflex, attentional dot probe, facial expression recognition, and emotional memory). Mood and subjective experience were also measured. RESULTS: Diazepam significantly modulated attentional vigilance to masked emotional faces and significantly decreased overall startle reactivity. Diazepam did not significantly affect mood, alertness, response times, facial expression recognition, or sustained attention. CONCLUSIONS: At non-sedating doses, diazepam produces effects on attentional vigilance and startle responsivity that are consistent with its anxiolytic action. This may be an underlying mechanism through which benzodiazepines exert their therapeutic effects in clinical anxiety.
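The attentional dot-probe measure mentioned above is conventionally scored as a bias index contrasting reaction times by probe location; a minimal sketch, assuming the standard scoring (the study's exact computation is not given here):

```python
def attentional_bias(rt_probe_at_neutral, rt_probe_at_threat):
    """Dot-probe attentional bias score in ms.

    Positive values indicate vigilance toward threat (faster responses
    when the probe replaces the threat face). This is the conventional
    scoring, assumed here for illustration.
    """
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(rt_probe_at_neutral) - mean(rt_probe_at_threat)

# Hypothetical trial-level reaction times (ms)
print(attentional_bias([520, 540, 530], [500, 510, 505]))  # 25.0
```

A drug-induced shift of this score toward zero (or negative values) would correspond to reduced vigilance toward masked threat faces.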
Phrase Frequency Effects in Language Production
A classic debate in the psychology of language concerns the question of the grain-size of the linguistic information that is stored in memory. One view is that only morphologically simple forms are stored (e.g., "car", "red"), and that more complex forms of language, such as multi-word phrases (e.g., "red car"), are generated on-line from the simple forms. In two experiments we tested this view. In Experiment 1, participants produced noun+adjective and noun+noun phrases that were elicited by experimental displays consisting of colored line drawings and two superimposed line drawings. In Experiment 2, participants produced noun+adjective and determiner+noun+adjective utterances elicited by colored line drawings. In both experiments, naming latencies decreased with increasing frequency of the multi-word phrase, and were unaffected by the frequency of the object name in the utterance. These results suggest that the language system is sensitive to the distribution of linguistic information at grain-sizes beyond individual words.
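The frequency effect reported above is the kind of relationship typically tested as a negative slope of naming latency on log phrase frequency. A minimal sketch with entirely hypothetical data (the study's actual items, counts, and statistical model are not reproduced here):

```python
import math

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var = sum((a - mean_x) ** 2 for a in x)
    return cov / var

# Hypothetical: phrase frequency (corpus counts) vs. naming latency (ms)
log_freq = [math.log(f) for f in (10, 50, 200, 1000)]
latency = [720, 690, 660, 630]
print(ols_slope(log_freq, latency))  # negative: higher frequency, faster naming
```

A reliably negative slope for phrase frequency, alongside a flat slope for single-word frequency, is the pattern that favors storage of multi-word units.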